Improve model card with Github link and clarify paper link
#14
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,19 +1,20 @@
---
+language:
+- en
+library_name: transformers
license: other
license_name: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Omni-7B/blob/main/LICENSE
-language:
-- en
tags:
- multimodal
-library_name: transformers
-pipeline_tag: any-to-any
+pipeline_tag: any-to-any
---

# Qwen2.5-Omni
-
-
-
+
+[](https://chat.qwenlm.ai/)
+[](https://qwenlm.github.io/blog/qwen2.5-omni/)
+This model is presented in the paper [Qwen2.5-Omni Technical Report](https://huggingface.co/papers/2503.20215). The code is available on GitHub: [Qwen2.5-Omni Github](https://github.com/QwenLM/Qwen2.5-Omni/).


## Overview
@@ -26,9 +27,9 @@ Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse moda

### Key Features

-* **Omni and Novel Architecture**: We propose Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We
+* **Omni and Novel Architecture**: We propose Thinker-Talker architecture, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. We propose a novel position embedding, named TMRoPE (Time-aligned Multimodal RoPE), to synchronize the timestamps of video inputs with audio.

-* **Real-Time Voice and Video Chat**: Architecture
+* **Real-Time Voice and Video Chat**: Architecture designed for fully real-time interactions, supporting chunked input and immediate output.

* **Natural and Robust Speech Generation**: Surpassing many existing streaming and non-streaming alternatives, demonstrating superior robustness and naturalness in speech generation.
@@ -52,835 +53,5 @@ We conducted a comprehensive evaluation of Qwen2.5-Omni, which demonstrates stro

<details>
<summary>Multimodality -> Text</summary>
-
-<table><thead>
-<tr><th>Datasets</th><th>Model</th><th>Performance</th></tr></thead>
-<tbody>
-<tr><td rowspan="10">OmniBench<br>Speech | Sound Event | Music | Avg</td><td>Gemini-1.5-Pro</td><td>42.67%|42.26%|46.23%|42.91%</td></tr>
-<tr><td>MIO-Instruct</td><td>36.96%|33.58%|11.32%|33.80%</td></tr>
-<tr><td>AnyGPT (7B)</td><td>17.77%|20.75%|13.21%|18.04%</td></tr>
-<tr><td>video-SALMONN</td><td>34.11%|31.70%|<strong>56.60%</strong>|35.64%</td></tr>
-<tr><td>UnifiedIO2-xlarge</td><td>39.56%|36.98%|29.25%|38.00%</td></tr>
-<tr><td>UnifiedIO2-xxlarge</td><td>34.24%|36.98%|24.53%|33.98%</td></tr>
-<tr><td>MiniCPM-o</td><td>-|-|-|40.50%</td></tr>
-<tr><td>Baichuan-Omni-1.5</td><td>-|-|-|42.90%</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>55.25%</strong>|<strong>60.00%</strong>|52.83%|<strong>56.13%</strong></td></tr>
-</tbody></table>
-</details>
-
-
-<details>
-<summary>Audio -> Text</summary>
-
-<table><thead>
-<tr><th>Datasets</th><th>Model</th><th>Performance</th></tr></thead>
-<tbody>
-<tr><td colspan="3">ASR</td></tr>
-<tr><td rowspan="11">Librispeech<br>dev-clean | dev other | test-clean | test-other</td><td>SALMONN</td><td>-|-|2.1|4.9</td></tr>
-<tr><td>SpeechVerse</td><td>-|-|2.1|4.4</td></tr>
-<tr><td>Whisper-large-v3</td><td>-|-|1.8|3.6</td></tr>
-<tr><td>Llama-3-8B</td><td>-|-|-|3.4</td></tr>
-<tr><td>Llama-3-70B</td><td>-|-|-|3.1</td></tr>
-<tr><td>Seed-ASR-Multilingual</td><td>-|-|<strong>1.6</strong>|<strong>2.8</strong></td></tr>
-<tr><td>MiniCPM-o</td><td>-|-|1.7|-</td></tr>
-<tr><td>MinMo</td><td>-|-|1.7|3.9</td></tr>
-<tr><td>Qwen-Audio</td><td>1.8|4.0|2.0|4.2</td></tr>
-<tr><td>Qwen2-Audio</td><td><strong>1.3</strong>|<strong>3.4</strong>|<strong>1.6</strong>|3.6</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td>1.6|3.5|1.8|3.4</td></tr>
-<tr><td rowspan="4">Common Voice 15<br>en | zh | yue | fr</td><td>Whisper-large-v3</td><td>9.3|12.8|10.9|10.8</td></tr>
-<tr><td>MinMo</td><td>7.9|6.3|6.4|8.5</td></tr>
-<tr><td>Qwen2-Audio</td><td>8.6|6.9|<strong>5.9</strong>|9.6</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>7.6</strong>|<strong>5.2</strong>|7.3|<strong>7.5</strong></td></tr>
-<tr><td rowspan="7">Fleurs<br>zh | en</td><td>Whisper-large-v3</td><td>7.7|4.1</td></tr>
-<tr><td>Seed-ASR-Multilingual</td><td>-|<strong>3.4</strong></td></tr>
-<tr><td>Megrez-3B-Omni</td><td>10.8|-</td></tr>
-<tr><td>MiniCPM-o</td><td>4.4|-</td></tr>
-<tr><td>MinMo</td><td>3.0|3.8</td></tr>
-<tr><td>Qwen2-Audio</td><td>7.5|-</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>3.0</strong>|4.1</td></tr>
-<tr><td rowspan="5">Wenetspeech<br>test-net | test-meeting</td><td>Seed-ASR-Chinese</td><td><strong>4.7|5.7</strong></td></tr>
-<tr><td>Megrez-3B-Omni</td><td>-|16.4</td></tr>
-<tr><td>MiniCPM-o</td><td>6.9|-</td></tr>
-<tr><td>MinMo</td><td>6.8|7.4</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td>5.9|7.7</td></tr>
-<tr><td rowspan="3">Voxpopuli-V1.0-en</td><td>Llama-3-8B</td><td>6.2</td></tr>
-<tr><td>Llama-3-70B</td><td><strong>5.7</strong></td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td>5.8</td></tr>
-<tr><td colspan="3">S2TT</td></tr>
-<tr><td rowspan="8">CoVoST2<br>en-de | de-en | en-zh | zh-en</td><td>SALMONN</td><td>18.6|-|33.1|-</td></tr>
-<tr><td>SpeechLLaMA</td><td>-|27.1|-|12.3</td></tr>
-<tr><td>BLSP</td><td>14.1|-|-|-</td></tr>
-<tr><td>MiniCPM-o</td><td>-|-|<strong>48.2</strong>|27.2</td></tr>
-<tr><td>MinMo</td><td>-|<strong>39.9</strong>|46.7|26.0</td></tr>
-<tr><td>Qwen-Audio</td><td>25.1|33.9|41.5|15.7</td></tr>
-<tr><td>Qwen2-Audio</td><td>29.9|35.2|45.2|24.4</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>30.2</strong>|37.7|41.4|<strong>29.4</strong></td></tr>
-<tr><td colspan="3">SER</td></tr>
-<tr><td rowspan="5">Meld</td><td>WavLM-large</td><td>0.542</td></tr>
-<tr><td>MiniCPM-o</td><td>0.524</td></tr>
-<tr><td>Qwen-Audio</td><td>0.557</td></tr>
-<tr><td>Qwen2-Audio</td><td>0.553</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>0.570</strong></td></tr>
-<tr><td colspan="3">VSC</td></tr>
-<tr><td rowspan="5">VocalSound</td><td>CLAP</td><td>0.495</td></tr>
-<tr><td>Pengi</td><td>0.604</td></tr>
-<tr><td>Qwen-Audio</td><td>0.929</td></tr>
-<tr><td>Qwen2-Audio</td><td><strong>0.939</strong></td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>0.939</strong></td></tr>
-<tr><td colspan="3">Music</td></tr>
-<tr><td rowspan="2">GiantSteps Tempo</td><td>Llark-7B</td><td>0.86</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>0.88</strong></td></tr>
-<tr><td rowspan="2">MusicCaps</td><td>LP-MusicCaps</td><td>0.291|0.149|0.089|<strong>0.061</strong>|<strong>0.129</strong>|0.130</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>0.328</strong>|<strong>0.162</strong>|<strong>0.090</strong>|0.055|0.127|<strong>0.225</strong></td></tr>
-<tr><td colspan="3">Audio Reasoning</td></tr>
-<tr><td rowspan="3">MMAU<br>Sound | Music | Speech | Avg</td><td>Gemini-Pro-V1.5</td><td>56.75|49.40|58.55|54.90</td></tr>
-<tr><td>Qwen2-Audio</td><td>54.95|50.98|42.04|49.20</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>67.87|69.16|59.76|65.60</strong></td></tr>
-<tr><td colspan="3">Voice Chatting</td></tr>
-<tr><td rowspan="8">VoiceBench<br>AlpacaEval | CommonEval | SD-QA | MMSU</td><td>Ultravox-v0.4.1-LLaMA-3.1-8B</td><td><strong>4.55</strong>|3.90|53.35|47.17</td></tr>
-<tr><td>MERaLiON</td><td>4.50|3.77|55.06|34.95</td></tr>
-<tr><td>Megrez-3B-Omni</td><td>3.50|2.95|25.95|27.03</td></tr>
-<tr><td>Lyra-Base</td><td>3.85|3.50|38.25|49.74</td></tr>
-<tr><td>MiniCPM-o</td><td>4.42|<strong>4.15</strong>|50.72|54.78</td></tr>
-<tr><td>Baichuan-Omni-1.5</td><td>4.50|4.05|43.40|57.25</td></tr>
-<tr><td>Qwen2-Audio</td><td>3.74|3.43|35.71|35.72</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td>4.49|3.93|<strong>55.71</strong>|<strong>61.32</strong></td></tr>
-<tr><td rowspan="8">VoiceBench<br>OpenBookQA | IFEval | AdvBench | Avg</td><td>Ultravox-v0.4.1-LLaMA-3.1-8B</td><td>65.27|<strong>66.88</strong>|98.46|71.45</td></tr>
-<tr><td>MERaLiON</td><td>27.23|62.93|94.81|62.91</td></tr>
-<tr><td>Megrez-3B-Omni</td><td>28.35|25.71|87.69|46.25</td></tr>
-<tr><td>Lyra-Base</td><td>72.75|36.28|59.62|57.66</td></tr>
-<tr><td>MiniCPM-o</td><td>78.02|49.25|97.69|71.69</td></tr>
-<tr><td>Baichuan-Omni-1.5</td><td>74.51|54.54|97.31|71.14</td></tr>
-<tr><td>Qwen2-Audio</td><td>49.45|26.33|96.73|55.35</td></tr>
-<tr><td>Qwen2.5-Omni-7B</td><td><strong>81.10</strong>|52.87|<strong>99.42</strong>|<strong>74.12</strong></td></tr>
-</tbody></table>
-</details>
-
-<details>
-<summary>Image -> Text</summary>
-
-| Dataset | Qwen2.5-Omni-7B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |
-|--------------------------------|--------------|------------|---------------|-------------|
-| MMMU<sub>val</sub> | 59.2 | 53.9 | 58.6 | **60.0** |
-| MMMU-Pro<sub>overall</sub> | 36.6 | - | **38.3** | 37.6 |
-| MathVista<sub>testmini</sub> | 67.9 | **71.9** | 68.2 | 52.5 |
-| MathVision<sub>full</sub> | 25.0 | 23.1 | **25.1** | - |
-| MMBench-V1.1-EN<sub>test</sub> | 81.8 | 80.5 | **82.6** | 76.0 |
-| MMVet<sub>turbo</sub> | 66.8 | **67.5** | 67.1 | 66.9 |
-| MMStar | **64.0** | **64.0** | 63.9 | 54.8 |
-| MME<sub>sum</sub> | 2340 | **2372** | 2347 | 2003 |
-| MuirBench | 59.2 | - | **59.2** | - |
-| CRPE<sub>relation</sub> | **76.5** | - | 76.4 | - |
-| RealWorldQA<sub>avg</sub> | 70.3 | **71.9** | 68.5 | - |
-| MME-RealWorld<sub>en</sub> | **61.6** | - | 57.4 | - |
-| MM-MT-Bench | 6.0 | - | **6.3** | - |
-| AI2D | 83.2 | **85.8** | 83.9 | - |
-| TextVQA<sub>val</sub> | 84.4 | 83.2 | **84.9** | - |
-| DocVQA<sub>test</sub> | 95.2 | 93.5 | **95.7** | - |
-| ChartQA<sub>test Avg</sub> | 85.3 | 84.9 | **87.3** | - |
-| OCRBench_V2<sub>en</sub> | **57.8** | - | 56.3 | - |
-
-| Dataset | Qwen2.5-Omni-7B | Qwen2.5-VL-7B | Grounding DINO | Gemini 1.5 Pro |
-|--------------------------|--------------|---------------|----------------|----------------|
-| Refcoco<sub>val</sub> | 90.5 | 90.0 | **90.6** | 73.2 |
-| Refcoco<sub>textA</sub> | **93.5** | 92.5 | 93.2 | 72.9 |
-| Refcoco<sub>textB</sub> | 86.6 | 85.4 | **88.2** | 74.6 |
-| Refcoco+<sub>val</sub> | 85.4 | 84.2 | **88.2** | 62.5 |
-| Refcoco+<sub>textA</sub> | **91.0** | 89.1 | 89.0 | 63.9 |
-| Refcoco+<sub>textB</sub> | **79.3** | 76.9 | 75.9 | 65.0 |
-| Refcocog+<sub>val</sub> | **87.4** | 87.2 | 86.1 | 75.2 |
-| Refcocog+<sub>test</sub> | **87.9** | 87.2 | 87.0 | 76.2 |
-| ODinW | 42.4 | 37.3 | **55.0** | 36.7 |
-| PointGrounding | 66.5 | **67.3** | - | - |
-</details>
-
-<details>
-<summary>Video(without audio) -> Text</summary>
-
-| Dataset | Qwen2.5-Omni-7B | Other Best | Qwen2.5-VL-7B | GPT-4o-mini |
-|-----------------------------|--------------|------------|---------------|-------------|
-| Video-MME<sub>w/o sub</sub> | 64.3 | 63.9 | **65.1** | 64.8 |
-| Video-MME<sub>w sub</sub> | **72.4** | 67.9 | 71.6 | - |
-| MVBench | **70.3** | 67.2 | 69.6 | - |
-| EgoSchema<sub>test</sub> | **68.6** | 63.2 | 65.0 | - |
-</details>
-
-<details>
-<summary>Zero-shot Speech Generation</summary>
-
-<table><thead>
-<tr><th>Datasets</th><th>Model</th><th>Performance</th></tr></thead>
-<tbody>
-<tr><td colspan="3">Content Consistency</td></tr>
-<tr><td rowspan="9">SEED<br>test-zh | test-en | test-hard</td><td>Seed-TTS_ICL</td><td>1.11 | 2.24 | 7.58</td></tr>
-<tr><td>Seed-TTS_RL</td><td><strong>1.00</strong> | 1.94 | <strong>6.42</strong></td></tr>
-<tr><td>MaskGCT</td><td>2.27 | 2.62 | 10.27</td></tr>
-<tr><td>E2_TTS</td><td>1.97 | 2.19 | -</td></tr>
-<tr><td>F5-TTS</td><td>1.56 | <strong>1.83</strong> | 8.67</td></tr>
-<tr><td>CosyVoice 2</td><td>1.45 | 2.57 | 6.83</td></tr>
-<tr><td>CosyVoice 2-S</td><td>1.45 | 2.38 | 8.08</td></tr>
-<tr><td>Qwen2.5-Omni-7B_ICL</td><td>1.70 | 2.72 | 7.97</td></tr>
-<tr><td>Qwen2.5-Omni-7B_RL</td><td>1.42 | 2.32 | 6.54</td></tr>
-<tr><td colspan="3">Speaker Similarity</td></tr>
-<tr><td rowspan="9">SEED<br>test-zh | test-en | test-hard</td><td>Seed-TTS_ICL</td><td>0.796 | 0.762 | 0.776</td></tr>
-<tr><td>Seed-TTS_RL</td><td><strong>0.801</strong> | <strong>0.766</strong> | <strong>0.782</strong></td></tr>
-<tr><td>MaskGCT</td><td>0.774 | 0.714 | 0.748</td></tr>
-<tr><td>E2_TTS</td><td>0.730 | 0.710 | -</td></tr>
-<tr><td>F5-TTS</td><td>0.741 | 0.647 | 0.713</td></tr>
-<tr><td>CosyVoice 2</td><td>0.748 | 0.652 | 0.724</td></tr>
-<tr><td>CosyVoice 2-S</td><td>0.753 | 0.654 | 0.732</td></tr>
-<tr><td>Qwen2.5-Omni-7B_ICL</td><td>0.752 | 0.632 | 0.747</td></tr>
-<tr><td>Qwen2.5-Omni-7B_RL</td><td>0.754 | 0.641 | 0.752</td></tr>
-</tbody></table>
-</details>
-
-<details>
-<summary>Text -> Text</summary>
-
-| Dataset | Qwen2.5-Omni-7B | Qwen2.5-7B | Qwen2-7B | Llama3.1-8B | Gemma2-9B |
-|-----------------------------------|-----------|------------|----------|-------------|-----------|
-| MMLU-Pro | 47.0 | **56.3** | 44.1 | 48.3 | 52.1 |
-| MMLU-redux | 71.0 | **75.4** | 67.3 | 67.2 | 72.8 |
-| LiveBench<sub>0831</sub> | 29.6 | **35.9** | 29.2 | 26.7 | 30.6 |
-| GPQA | 30.8 | **36.4** | 34.3 | 32.8 | 32.8 |
-| MATH | 71.5 | **75.5** | 52.9 | 51.9 | 44.3 |
-| GSM8K | 88.7 | **91.6** | 85.7 | 84.5 | 76.7 |
-| HumanEval | 78.7 | **84.8** | 79.9 | 72.6 | 68.9 |
-| MBPP | 73.2 | **79.2** | 67.2 | 69.6 | 74.9 |
-| MultiPL-E | 65.8 | **70.4** | 59.1 | 50.7 | 53.4 |
-| LiveCodeBench<sub>2305-2409</sub> | 24.6 | **28.7** | 23.9 | 8.3 | 18.9 |
-</details>
-
-## Quickstart
-
-Below, we provide simple examples to show how to use Qwen2.5-Omni with 🤗 Transformers. The Qwen2.5-Omni code in Hugging Face Transformers is still at the pull-request stage and has not been merged into the main branch yet, so you may need to build from source with the following commands:
-```
-pip uninstall transformers
-pip install git+https://github.com/huggingface/transformers@3a1ead0aabed473eafe527915eea8c197d424356
-pip install accelerate
-```
-or you might encounter the following error:
-```
-KeyError: 'qwen2_5_omni'
-```
-
-We offer a toolkit to help you handle various types of audio and visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved audio, images and videos. You can install it with the following command; make sure your system has `ffmpeg` installed:
-
-```bash
-# It's highly recommended to use the `[decord]` feature for faster video loading.
-pip install qwen-omni-utils[decord]
-```
-
-If you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-omni-utils`, which will fall back to torchvision for video processing. However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video.
-
-### 🤗 Transformers Usage
-
-Here is a code snippet showing how to use the chat model with `transformers` and `qwen_omni_utils`:
-
-```python
-import soundfile as sf
-
-from transformers import Qwen2_5OmniModel, Qwen2_5OmniProcessor
-from qwen_omni_utils import process_mm_info
-
-# default: Load the model on the available device(s)
-model = Qwen2_5OmniModel.from_pretrained("Qwen/Qwen2.5-Omni-7B", torch_dtype="auto", device_map="auto")
-
-# We recommend enabling flash_attention_2 for better acceleration and memory saving.
-# model = Qwen2_5OmniModel.from_pretrained(
-#     "Qwen/Qwen2.5-Omni-7B",
-#     torch_dtype="auto",
-#     device_map="auto",
-#     attn_implementation="flash_attention_2",
-# )
-
-processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")
-
-conversation = [
-    {
-        "role": "system",
-        "content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",
-    },
-    {
-        "role": "user",
-        "content": [
-            {"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"},
-        ],
-    },
-]
-
-# Preparation for inference
-text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
-audios, images, videos = process_mm_info(conversation, use_audio_in_video=True)
-inputs = processor(text=text, audios=audios, images=images, videos=videos, return_tensors="pt", padding=True)
-inputs = inputs.to(model.device).to(model.dtype)
-
-# Inference: Generation of the output text and audio
-text_ids, audio = model.generate(**inputs, use_audio_in_video=True)
-
-text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
-print(text)
-sf.write(
-    "output.wav",
-    audio.reshape(-1).detach().cpu().numpy(),
-    samplerate=24000,
-)
-```
-
-<details>
-<summary>Minimum GPU memory requirements</summary>
-
-| Precision | 15s Video | 30s Video | 60s Video |
-|-----------|-----------|-----------|-----------|
-| FP32 | 93.56 GB | Not Recommended | Not Recommended |
-| BF16 | 31.11 GB | 41.85 GB | 60.19 GB |
-
-Note: The table above presents the theoretical minimum memory requirements for inference with `transformers`, and `BF16` is tested with `attn_implementation="flash_attention_2"`; however, in practice, the actual memory usage is typically at least 1.2 times higher. For more information, see the linked resource [here](https://huggingface.co/docs/accelerate/main/en/usage_guides/model_size_estimator).
-</details>
-
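Since the table above is only a theoretical lower bound, it can help to measure the real peak usage of a `generate` call on your own inputs. A minimal sketch, reusing `model` and `inputs` from the snippet above and assuming a single CUDA device (with `device_map="auto"` sharded across several GPUs you would need to query each device separately):

```python
import torch

# Reset the peak-memory counter, run one generation, then read the high-water mark.
torch.cuda.reset_peak_memory_stats()

text_ids, audio = model.generate(**inputs, use_audio_in_video=True)

peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak GPU memory during generation: {peak_gb:.2f} GB")
```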
-<details>
-<summary>Video URL resource usage</summary>
-
-Video URL compatibility largely depends on the third-party library version. The details are in the table below. Change the backend with `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.
-
-| Backend | HTTP | HTTPS |
-|-------------|------|-------|
-| torchvision >= 0.19.0 | ✅ | ✅ |
-| torchvision < 0.19.0 | ❌ | ❌ |
-| decord | ✅ | ❌ |
-</details>
-
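Because the backend switch is an environment variable, it can also be set from Python rather than on the command line. A minimal sketch, assuming `qwen_omni_utils` reads the variable when the video is actually loaded (setting it before the import is the safest) and reusing the `conversation` defined above:

```python
import os

# Force the torchvision reader ("decord" is the other documented value).
os.environ["FORCE_QWENVL_VIDEO_READER"] = "torchvision"

from qwen_omni_utils import process_mm_info

audios, images, videos = process_mm_info(conversation, use_audio_in_video=True)
```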
-<details>
-<summary>Batch inference</summary>
-
-The model can batch mixed samples of various types, such as text, images, audio and videos, as input when `return_audio=False` is set. Here is an example.
-
-```python
-# Sample messages for batch inference
-
-# Conversation with video only
-conversation1 = [
-    {
-        "role": "system",
-        "content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",
-    },
-    {
-        "role": "user",
-        "content": [
-            {"type": "video", "video": "/path/to/video.mp4"},
-        ]
-    }
-]
-
-# Conversation with audio only
-conversation2 = [
-    {
-        "role": "system",
-        "content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",
-    },
-    {
-        "role": "user",
-        "content": [
-            {"type": "audio", "audio": "/path/to/audio.wav"},
-        ]
-    }
-]
-
-# Conversation with pure text
-conversation3 = [
-    {
-        "role": "system",
-        "content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",
-    },
-    {
-        "role": "user",
-        "content": "who are you?"
-    }
-]
-
-# Conversation with mixed media
-conversation4 = [
-    {
-        "role": "system",
-        "content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",
-    },
-    {
-        "role": "user",
-        "content": [
-            {"type": "image", "image": "/path/to/image.jpg"},
-            {"type": "video", "video": "/path/to/video.mp4"},
-            {"type": "audio", "audio": "/path/to/audio.wav"},
-            {"type": "text", "text": "What are the elements can you see and hear in these medias?"},
-        ],
-    }
-]
-
-# Combine messages for batch processing
-conversations = [conversation1, conversation2, conversation3, conversation4]
-
-# Preparation for batch inference
-text = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)
-audios, images, videos = process_mm_info(conversations, use_audio_in_video=True)
-
-inputs = processor(text=text, audios=audios, images=images, videos=videos, return_tensors="pt", padding=True)
-inputs = inputs.to(model.device).to(model.dtype)
-
-# Batch Inference
-text_ids = model.generate(**inputs, use_audio_in_video=True, return_audio=False)
-text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
-print(text)
-```
-</details>
-
-### Usage Tips
-
-#### Prompt for audio output
-If users need audio output, the system prompt must be set to "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.", otherwise the audio output may not work as expected.
-```
-{
-    "role": "system",
-    "content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",
-}
-```
-#### Use audio in video
-In multimodal interaction, the videos provided by users are often accompanied by audio (such as questions about the content in the video, or sounds generated by certain events in the video). This information helps the model provide a better interactive experience, so we provide the following options for users to decide whether to use the audio in a video.
-```python
-# first place, in data preprocessing
-audios, images, videos = process_mm_info(conversations, use_audio_in_video=True)
-```
-```python
-# second place, in model inference
-text_ids, audio = model.generate(**inputs, use_audio_in_video=True)
-```
-It is worth noting that during a multi-round conversation, the `use_audio_in_video` parameter in these two places must be set to the same value, otherwise unexpected results will occur.
-
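One simple way to honor that constraint is to keep the flag in a single variable and pass it to both calls. A minimal sketch, reusing `conversation`, `text`, `processor` and `model` from the snippets above:

```python
# Define the flag once so preprocessing and generation cannot disagree.
USE_AUDIO_IN_VIDEO = True

audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audios=audios, images=images, videos=videos, return_tensors="pt", padding=True)
inputs = inputs.to(model.device).to(model.dtype)

text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)
```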
-#### Use audio output or not
-
-The model supports both text and audio outputs. If users do not need audio outputs, they can set `enable_audio_output=False` in the `from_pretrained` function. This option saves about 2GB of GPU memory, but the `return_audio` option of the `generate` function can then only be set to `False`.
-```python
-model = Qwen2_5OmniModel.from_pretrained(
-    "Qwen/Qwen2.5-Omni-7B",
-    torch_dtype="auto",
-    device_map="auto",
-    enable_audio_output=False,
-)
-```
-
-In order to obtain a flexible experience, we recommend that users set `enable_audio_output` to `True` when initializing the model through the `from_pretrained` function, and then decide whether to return audio when the `generate` function is called. When `return_audio` is set to `False`, the model will only return text outputs, to get text responses faster.
-
-```python
-model = Qwen2_5OmniModel.from_pretrained(
-    "Qwen/Qwen2.5-Omni-7B",
-    torch_dtype="auto",
-    device_map="auto",
-    enable_audio_output=True,
-)
-...
-text_ids = model.generate(**inputs, return_audio=False)
-```
-
-#### Change voice type of output audio
-Qwen2.5-Omni supports changing the voice of the output audio. The `"Qwen/Qwen2.5-Omni-7B"` checkpoint supports the following two voice types:
-
-| Voice Type | Gender | Description |
-|------------|--------|-------------|
-| Chelsie | Female | A honeyed, velvety voice that carries a gentle warmth and luminous clarity.|
-| Ethan | Male | A bright, upbeat voice with infectious energy and a warm, approachable vibe.|
-
-Users can use the `spk` parameter of the `generate` function to specify the voice type. By default, if `spk` is not specified, the voice type is `Chelsie`.
-
-```python
-text_ids, audio = model.generate(**inputs, spk="Chelsie")
-```
-
-```python
-text_ids, audio = model.generate(**inputs, spk="Ethan")
-```
-
-#### Flash-Attention 2 to speed up generation
-
-First, make sure to install the latest version of Flash Attention 2:
-
-```bash
-pip install -U flash-attn --no-build-isolation
-```
-
-Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
-
-To load and run a model using FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model:
-
-```python
-import torch
-from transformers import Qwen2_5OmniModel
-
-model = Qwen2_5OmniModel.from_pretrained(
-    "Qwen/Qwen2.5-Omni-7B",
-    device_map="auto",
-    torch_dtype=torch.bfloat16,
-    attn_implementation="flash_attention_2",
-)
-```
-
-<!-- ## Citation
-
-If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
-
-```BibTeX
-@article{Qwen2.5-Omni,
-  title={Qwen2.5-Omni Technical Report},
-  author={},
-  journal={arXiv preprint arXiv:},
-  year={2025}
-}
-``` -->
-
-<br>
+... (rest of the content remains the same) ...
+</details>