This collection contains models quantized with SINQ (Sinkhorn-Normalized Quantization), a calibration-free low-precision quantization method for LLM weights.
Papers
- SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights
- Top-Theta Attention: Sparsifying Transformers by Compensated Thresholding
Models (15)

| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| huawei-csl/Apertus-8B-2509-4bit-ASINQ | Text Generation | 5B | 10 | 2 |
| huawei-csl/Apertus-8B-2509-4bit-SINQ | Text Generation | 5B | 8 | 2 |
| huawei-csl/Qwen3-235B-A22B-3bit-SINQ | Text Generation | n/a | 50 | 2 |
| huawei-csl/Qwen3-32B-4bit-ASINQ | Text Generation | 18B | 67 | 8 |
| huawei-csl/Qwen3-32B-4bit-SINQ | Text Generation | 18B | 80 | 7 |
| huawei-csl/Qwen3-14B-4bit-ASINQ | Text Generation | 9B | 74 | 6 |
| huawei-csl/Qwen3-14B-4bit-SINQ | Text Generation | 9B | 43 | 5 |
| huawei-csl/Qwen3-1.7B-4bit-SINQ | Text Generation | 1B | 45 | 5 |
| huawei-csl/Qwen3-32B-3bit-ASINQ | Text Generation | 6B | 136 | 5 |
| huawei-csl/Qwen3-1.7B-4bit-ASINQ | Text Generation | 1B | 49 | 5 |
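The checkpoints above are hosted as standard Hugging Face model repositories. Below is a minimal loading sketch that assumes they can be loaded through the transformers AutoModelForCausalLM API; the individual model cards remain the authoritative reference for any SINQ-specific dependencies or loading flags.

```python
# Minimal sketch: loading one of the SINQ-quantized checkpoints with the
# Hugging Face transformers API. The exact loading path (extra packages,
# trust_remote_code, or a SINQ-specific runtime) may differ per model card,
# so treat this as an illustration rather than the official recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huawei-csl/Qwen3-1.7B-4bit-SINQ"  # any model from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place weights on the available GPU(s)/CPU
    trust_remote_code=True,  # assumption: custom quantized layers may require remote code
)

prompt = "Explain Sinkhorn normalization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```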
Datasets: none public yet.