Add GPTQModel W4A16 INT format Quant ?
#13
by X-SZM - opened
While quantization formats such as GGUF and AWQ are already widely available in the community, models in GPTQModel formats like W4A16 and W8A16 remain notably scarce. These formats offer clear advantages in quantization quality and reduced precision loss. Because quantizing a model with GPTQModel requires calibration data, a step that is difficult for most individual users to carry out, it would be valuable for organized community groups to share W4A16 and W8A16 quants. Such contributions would be greatly appreciated.
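For context, calibration-based quantization with GPTQModel roughly follows the pattern below. This is a minimal sketch modeled on GPTQModel's documented usage; the source model ID, calibration slice, and output path are illustrative placeholders, and the exact API surface may vary between GPTQModel releases.

```python
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

# Placeholders: swap in the actual model and a calibration set that matches
# the model's domain. ~1024 samples is a common starting point.
model_id = "org/source-model"
quant_path = "source-model-gptqmodel-w4a16-g128"

calibration_dataset = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(1024))["text"]

# W4A16: 4-bit weights, 16-bit activations; group_size=128 is a common
# accuracy/size trade-off.
quant_config = QuantizeConfig(bits=4, group_size=128)

model = GPTQModel.load(model_id, quant_config)
model.quantize(calibration_dataset, batch_size=1)  # calibration pass
model.save(quant_path)
```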
X-SZM changed discussion title from Add W4A16 Quant ? to Add llm-compressor W4A16 INT format Quant ?
X-SZM changed discussion title from Add llm-compressor W4A16 INT format Quant ? to Add GPTQModel W4A16 INT format Quant ?
Will be doing this shortly. Watch the quantizations; I'll post tomorrow after I've run some basic tests to confirm the quant doesn't break the model at W4A16 group size 128. It will be RDNA 4 compatible.
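A quick post-quantization smoke test might look like the following. This is a hedged sketch assuming the quant was saved with GPTQModel and that the load/generate API matches current releases; the path and prompt are placeholders, not the author's actual test procedure.

```python
from gptqmodel import GPTQModel

# Placeholder path to the saved W4A16 (4-bit, group size 128) quant.
quant_path = "source-model-gptqmodel-w4a16-g128"
model = GPTQModel.load(quant_path)

# Generate a short completion to confirm the quantized model still
# produces coherent output (a basic "does it break" check).
tokens = model.generate("The quick brown fox")[0]
print(model.tokenizer.decode(tokens))
```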