This model is a 4-bit GPTQ-quantized version of meta-llama/Llama-3.2-1B-Instruct. The code used to generate it is as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/Llama-3.2-1B-Instruct"

# The tokenizer is needed by GPTQConfig to tokenize the calibration dataset
tokenizer = AutoTokenizer.from_pretrained(model_id)

quantization_config = GPTQConfig(
    bits=4,            # 4-bit weights
    group_size=128,    # quantization group size
    dataset="c4",      # calibration dataset
    desc_act=False,    # disable activation-order (act-order) quantization
    tokenizer=tokenizer,
)

# Quantization runs inside from_pretrained when a GPTQConfig is passed
quant_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)
```
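To run inference, the quantized checkpoint can be loaded directly from the Hub. Below is a minimal sketch, assuming the quantized weights were pushed to arishiki/Llama-3.2-1B-Instruct-quantized-gptq-4g01 (this repository) and that a GPTQ backend such as auto-gptq or gptqmodel is installed; the prompt and generation settings are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage: load the already-quantized checkpoint from the Hub
quant_id = "arishiki/Llama-3.2-1B-Instruct-quantized-gptq-4g01"

tokenizer = AutoTokenizer.from_pretrained(quant_id)
model = AutoModelForCausalLM.from_pretrained(quant_id, device_map="auto")

# Chat-style prompt using the instruct model's chat template
messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```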