opt-30b-w4g128-AutoRound / quantization_config.json
{
  "bits": 4,
  "group_size": 128,
  "sym": true,
  "data_type": "int",
  "iters": 1000,
  "nsamples": 512,
  "low_gpu_mem_usage": true,
  "autoround_version": "0.7.1",
  "quant_method": "auto-round",
  "packing_format": "auto_round:auto_gptq"
}
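A minimal sketch of how a config like this could be produced with the auto-round library, assuming the base checkpoint is facebook/opt-30b and the output directory name is illustrative; the exact save call may differ between auto-round releases, so treat this as an approximation of the settings recorded above rather than the exact command used for this repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Assumed base model; the repo name suggests OPT-30B but the source checkpoint is not stated here.
model_name = "facebook/opt-30b"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Mirror the values recorded in quantization_config.json:
# 4-bit symmetric int quantization, group size 128, 1000 tuning iterations,
# 512 calibration samples, with low GPU memory usage enabled.
autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    sym=True,
    iters=1000,
    nsamples=512,
    low_gpu_mem_usage=True,
)

# Quantize and export in the GPTQ-compatible packing format listed under "packing_format".
# Hypothetical output path; the format string matches the config.
autoround.quantize_and_save("./opt-30b-w4g128-AutoRound", format="auto_round:auto_gptq")
```

The resulting folder would contain the packed 4-bit weights alongside a quantization_config.json with the fields shown above, which downstream loaders use to reconstruct the quantization scheme.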