ubergarm committed on
Commit f595d70 · 1 Parent(s): 5402d3c

setting up repo

Files changed (2)
  1. .gitattributes +3 -0
  2. README.md +503 -3
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ imatrix-*.dat filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,503 @@
- ---
- license: mit
- ---
---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: zai-org/GLM-4.6
license: mit
base_model_relation: quantized
tags:
- imatrix
- conversational
- ik_llama.cpp
---

## WIP

- [ ] download bf16 safetensors
- [ ] run `convert_hf_to_gguf.py` to get bf16 GGUF
- [ ] calculate imatrix from above (see the rough sketch below)
- [ ] use imatrix to generate quants
- [ ] listed recipes might change (open discussion for special requests)

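For anyone following along at home, a rough sketch of the convert and imatrix steps, assuming mainline-style `convert_hf_to_gguf.py` arguments and the calibration corpus linked in the References below (all paths are hypothetical placeholders):

```bash
#!/usr/bin/env bash

# 1. Convert the downloaded bf16 safetensors into a bf16 GGUF
python convert_hf_to_gguf.py /mnt/raid/models/zai-org/GLM-4.6 \
    --outtype bf16 \
    --outfile /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-BF16.gguf

# 2. Compute the importance matrix over the calibration text
./build/bin/llama-imatrix \
    -m /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-BF16.gguf \
    -f ubergarm-imatrix-calibration-corpus-v02.txt \
    -o /mnt/raid/models/ubergarm/GLM-4.6-GGUF/imatrix-GLM-4.6-BF16.dat
```
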
## `ik_llama.cpp` imatrix Quantizations of zai-org/GLM-4.6
This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc!

*NOTE* `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.

Some of ik's new quants are supported by the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCPP, which has Windows builds for CUDA 12.9. Also check the [Windows builds by Thireus here](https://github.com/Thireus/ik_llama.cpp/releases), which have been built against CUDA 12.8.

These quants provide best-in-class perplexity for the given memory footprint.

## Big Thanks
Shout out to Wendell and the **Level1Techs** crew, the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), and [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models!

## Quant Collection
Perplexity computed against *wiki.test.raw*.

![Perplexity Chart](images/perplexity.png "Chart showing Perplexity improving as BPW increases.")

These first two are just test quants for baseline perplexity comparison:
* `BF16` TODO
  - Final estimate: PPL = TODO
* `Q8_0` TODO
  - Final estimate: PPL = TODO

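For context, perplexity numbers like these are typically measured with the fork's `llama-perplexity` tool over *wiki.test.raw*. A minimal sketch, assuming default 512-token chunks and a hypothetical model path:

```bash
# Hedged example: measure perplexity of one quant against wiki.test.raw
./build/bin/llama-perplexity \
    -m /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-IQ5_K.gguf \
    -f wiki.test.raw \
    -fa \
    --ctx-size 512 \
    --threads 32
```
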
## IQ5_K TODO
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 93 Repeating Layers [0-92]

# Attention
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k

# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=iq6_k
blk\..*\.nextn\.shared_head_head\.weight=iq6_k
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq6_k
output\.weight=iq6_k
"

# Strip comment lines and collapse the rules into the single
# comma-separated string expected by --custom-q
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N 0 -m 0 \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/GLM-4.6-GGUF/imatrix-GLM-4.6-BF16.dat \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-160x21B-4.5-BF16-00001-of-00015.gguf \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-IQ5_K.gguf \
    IQ5_K \
    192
```

</details>

## IQ4_K TODO
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 93 Repeating Layers [0-92]

# Attention
blk\..*\.attn_q.*=iq6_k
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=iq6_k

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=iq6_k

# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k

# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq5_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_k

# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=iq5_k
blk\..*\.nextn\.shared_head_head\.weight=iq5_k
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N 0 -m 0 \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/GLM-4.6-GGUF/imatrix-GLM-4.6-BF16.dat \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-160x21B-4.5-BF16-00001-of-00015.gguf \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-IQ4_K.gguf \
    IQ4_K \
    192
```

</details>

## IQ4_KSS TODO
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 93 Repeating Layers [0-92]

# Attention
blk\.(0|1|2)\.attn_q.*=q8_0
blk\.(0|1|2)\.attn_k.*=q8_0
blk\.(0|1|2)\.attn_v.*=q8_0
blk\.(0|1|2)\.attn_output.*=q8_0

blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_ks

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq4_ks

# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq4_ks

# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq4_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss

# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N 1 -m 1 \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/GLM-4.6-GGUF/imatrix-GLM-4.6-BF16.dat \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-160x21B-4.5-BF16-00001-of-00015.gguf \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-IQ4_KSS.gguf \
    IQ4_KSS \
    192
```

</details>

## IQ3_KT TODO
Final estimate: PPL = TODO

Designed for Dual RTX 6000 Pro Blackwell 192GB VRAM full offload (a hedged launch sketch follows the recipe).

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 93 Repeating Layers [0-92]

# Attention
blk\.(0|1|2)\.attn_q.*=q8_0
blk\.(0|1|2)\.attn_k.*=q8_0
blk\.(0|1|2)\.attn_v.*=q8_0
blk\.(0|1|2)\.attn_output.*=q8_0

blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=iq5_ks

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq4_ks

# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq4_ks

# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq4_kss
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_kt

# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N 1 -m 1 \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/GLM-4.6-GGUF/imatrix-GLM-4.6-BF16.dat \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-160x21B-4.5-BF16-00001-of-00015.gguf \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-IQ3_KT.gguf \
    IQ3_KT \
    192
```

</details>

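Since this size targets full GPU offload, here is a minimal launch sketch along the lines of the Quick Start below, just without offloading experts to CPU. The split model filename, tensor split, and context size are assumptions, not tested settings:

```bash
# Hedged full-offload example for 2x GPUs (Quick Start flags, minus -ot exps=CPU)
./build/bin/llama-server \
    --model GLM-4.6-IQ3_KT-00001-of-00003.gguf \
    --alias ubergarm/GLM-4.6-IQ3_KT \
    --ctx-size 32768 \
    -fa -fmoe \
    -ctk q8_0 -ctv q8_0 \
    -ngl 99 \
    --tensor-split 1,1 \
    --parallel 1 \
    --threads 8 \
    --host 127.0.0.1 \
    --port 8080
```
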
## IQ2_KL TODO
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 93 Repeating Layers [0-92]

# Attention
blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=iq5_ks
blk\..*\.attn_v.*=iq5_ks
blk\..*\.attn_output.*=iq5_ks

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq4_ks

# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq4_ks

# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq3_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl

# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N 1 -m 1 \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/GLM-4.6-GGUF/imatrix-GLM-4.6-BF16.dat \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-160x21B-4.5-BF16-00001-of-00015.gguf \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-IQ2_KL.gguf \
    IQ2_KL \
    192
```

</details>

## IQ2_KS TODO
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

Used [PR624](https://github.com/ikawrakow/ik_llama.cpp/pull/624).

```bash
#!/usr/bin/env bash

custom="
# 93 Repeating Layers [0-92]

# Attention
blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=iq5_ks
blk\..*\.attn_v.*=iq5_ks
blk\..*\.attn_output.*=iq5_ks

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq4_ks

# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq4_ks

# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks

# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N 1 -m 1 \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/GLM-4.6-GGUF/imatrix-GLM-4.6-BF16.dat \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-160x21B-4.5-BF16-00001-of-00015.gguf \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-IQ2_KS.gguf \
    IQ2_KS \
    192
```

</details>

## IQ1_KT TODO
Final estimate: PPL = TODO

*Good luck everybody!* 😅

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 93 Repeating Layers [0-92]

# Attention
blk\..*\.attn_q.*=iq4_kt
blk\..*\.attn_k.*=iq4_kt
blk\..*\.attn_v.*=iq4_kt
blk\..*\.attn_output.*=iq4_kt

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq4_kt
blk\..*\.ffn_(gate|up)\.weight=iq4_kt

# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq4_kt
blk\..*\.ffn_(gate|up)_shexp\.weight=iq4_kt

# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq2_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt

# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=iq4_kt
blk\..*\.nextn\.shared_head_head\.weight=iq4_kt
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N 0 -m 0 \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/GLM-4.6-GGUF/imatrix-GLM-4.6-BF16.dat \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-160x21B-4.5-BF16-00001-of-00015.gguf \
    /mnt/raid/models/ubergarm/GLM-4.6-GGUF/GLM-4.6-IQ1_KT.gguf \
    IQ1_KT \
    192
```

</details>

## Quick Start
If you want to disable thinking, add `/nothink` (correct: no underscore) at the *end* of your prompt. *TODO*: confirm whether this still holds for 4.6.

```bash
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp

# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON -DGGML_BLAS=OFF -DGGML_SCHED_MAX_COPIES=1
$ cmake --build build --config Release -j $(nproc)

# Run API server
$ ./build/bin/llama-server \
    --model GLM-4.6-IQ4_KSS-00001-of-00004.gguf \
    --alias ubergarm/GLM-4.6-IQ4_KSS \
    --ctx-size 32768 \
    -fa -fmoe \
    -ctk q8_0 -ctv q8_0 \
    -ub 4096 -b 4096 \
    -ngl 99 \
    -ot exps=CPU \
    --parallel 1 \
    --threads 8 \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap

# MCP/Tool Use
# --jinja etc...
```

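If you still need to fetch the weights, here is a hedged download example with `huggingface-cli`; the per-quant folder layout is an assumption until the uploads land:

```bash
# Hypothetical: grab just one quant's split GGUF files from this repo
pip install -U "huggingface_hub[cli]"
huggingface-cli download ubergarm/GLM-4.6-GGUF \
    --include "IQ4_KSS/*" \
    --local-dir ./GLM-4.6-GGUF
```
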
## References
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
* [Getting Started Guide (already out of date lol)](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
* [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)