IQ2_XXS optimal for me
#17 opened 7 months ago by jweb
World's Largest Dataset
#16 opened 7 months ago by deleted
Re-converting the GGUF for MLA? (👍 6 · 3 comments)
#15 opened 7 months ago by Silver267
What tool/framework to test GGUF models? (1 comment)
#14 opened 7 months ago by bobchenyx
Request: DOI
#13 opened 8 months ago by jeffhoule01
How to run ollama using these new quantized weights? (👀 1 · 2 comments)
#12 opened 8 months ago by vadimkantorov
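For readers arriving via the thread above: a minimal sketch of pointing ollama at a locally downloaded GGUF via a Modelfile. The file name and model tag below are hypothetical; substitute the quant you actually downloaded, and note that multi-part GGUFs may need to be merged first (e.g. with llama.cpp's gguf-split tool).

```shell
# Write a Modelfile whose FROM points at a local GGUF file
# (the file name here is hypothetical; use your downloaded quant)
cat > Modelfile <<'EOF'
FROM ./DeepSeek-V3-0324-UD-IQ1_S.gguf
EOF

# Register and run the model, but only if ollama is installed
if command -v ollama >/dev/null 2>&1; then
  ollama create deepseek-v3-0324 -f Modelfile
  ollama run deepseek-v3-0324 "Hello"
fi
```

This is a sketch under the assumption that your ollama build supports loading a raw GGUF via `FROM`; it is not a statement of how the repo authors intend the weights to be served.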
Running model "unsloth/DeepSeek-V3-0324-GGUF" with vLLM does not work (2 comments)
#11 opened 8 months ago by puppadas
The UD-IQ2_XXS is surprisingly good, but be aware that it degrades gradually yet significantly after about 1000 tokens (1 comment)
#9 opened 8 months ago by mmbela
671B params or 685B params? (6 comments)
#8 opened 8 months ago by createthis
How to run tool use correctly
#7 opened 8 months ago by rockcat-miao
How many bits of quantization are enough for code generation tasks? (1 comment)
#5 opened 8 months ago by luweigen
Added IQ1_S version to Ollama (3 comments)
#4 opened 8 months ago by Muhammadreza
Is the 2.51bit model using imatrix? (7 comments)
#3 opened 8 months ago by daweiba12
Will you release the imatrix.dat used for the quants? (2 comments)
#2 opened 8 months ago by tdh111
Would there be dynamic quantized versions like 2.51bit? (8 comments)
#1 opened 8 months ago by MotorBottle