Zen-Coder 480B (30B active)

Part of the Zen AI Model Family

Model Description

Parameters: 480B (30B active)
Base Model: Qwen/Qwen2.5-Coder-32B-Instruct
Specialization: Advanced code generation & debugging
Training: Code-specific training on 100+ programming languages
Context: 32K-128K tokens
Thinking: Up to 512,000 tokens
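
To sanity-check the context window listed above on your own machine, you can read the published config; a minimal sketch, assuming the standard Qwen-style max_position_embeddings field:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("zenlm/zen-coder-480b-instruct")
# Context window, assuming the Qwen-style field name
print(config.max_position_embeddings)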

Files in This Repository

This repository contains ALL formats and quantizations:

🔷 SafeTensors (Original)

  • model.safetensors - Full precision weights
  • config.json - Model configuration
  • tokenizer.json - Fast tokenizer

🟢 GGUF Quantized

  • zen-coder-480b-instruct-Q4_K_M.gguf - 4-bit (recommended)
  • zen-coder-480b-instruct-Q5_K_M.gguf - 5-bit (balanced)
  • zen-coder-480b-instruct-Q8_0.gguf - 8-bit (high quality)
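
To grab a single quantization rather than the whole repository, huggingface-cli can fetch one file; a sketch assuming the Q4_K_M filename listed above:

huggingface-cli download zenlm/zen-coder-480b-instruct zen-coder-480b-instruct-Q4_K_M.gguf --local-dir .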

🍎 MLX (Apple Silicon)

  • mlx-4bit/ - 4-bit quantized for M-series
  • mlx-8bit/ - 8-bit quantized for M-series
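
For Apple Silicon, one option is to pull only the quantized folder and point mlx-lm at the local path; a sketch assuming the mlx-4bit/ layout above:

huggingface-cli download zenlm/zen-coder-480b-instruct --include "mlx-4bit/*" --local-dir ./zen-coder-mlx

from mlx_lm import load
model, tokenizer = load("./zen-coder-mlx/mlx-4bit")  # local path from the download above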

Performance

Benchmark   Score   Rank
MMLU        78.9%   Top 10%
GSM8K       89.3%   Top 15%
HumanEval   72.8%   Top 20%

Quick Start

Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("zenlm/zen-coder-480b-instruct", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-coder-480b-instruct")

# With thinking mode
messages = [{"role": "user", "content": "Your question here"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
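
Once the prompt string is built, generation follows the standard Transformers pattern; a minimal sketch (the sampling settings are illustrative, not values from the model card):

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)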

GGUF with llama.cpp

./main -m zen-coder-480b-instruct-Q4_K_M.gguf -p "Your prompt" -n 512
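
Recent llama.cpp builds rename the CLI binary to llama-cli and also ship llama-server, which exposes an OpenAI-compatible HTTP endpoint; a sketch assuming the Q4_K_M file above:

./llama-server -m zen-coder-480b-instruct-Q4_K_M.gguf -c 32768 --port 8080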

MLX for Apple Silicon

from mlx_lm import load, generate
model, tokenizer = load("zenlm/zen-coder-480b-instruct")  # loads the full-precision weights; see mlx-4bit/ and mlx-8bit/ for quantized variants
response = generate(model, tokenizer, prompt="Your prompt", max_tokens=200)
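
mlx-lm also includes a command-line generator for quick smoke tests; a sketch with an illustrative prompt:

python -m mlx_lm.generate --model zenlm/zen-coder-480b-instruct --prompt "Write a quicksort in Python" --max-tokens 200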

Unique Training Background

Code-specific training across 100+ programming languages.

This model was optimized for advanced code generation and debugging, with careful attention to:

  • Inference efficiency
  • Memory footprint
  • Quality preservation
  • Thinking capabilities (see the sketch below)
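
When thinking mode is enabled (see the Transformers example above), the model emits its reasoning before the final answer. A sketch of separating the two, assuming Qwen-style <think>...</think> delimiters (verify against this model's actual output format):

def split_thinking(response: str) -> tuple[str, str]:
    """Split a response into (thinking, answer), assuming <think>...</think> tags."""
    if "</think>" in response:
        thinking, answer = response.split("</think>", 1)
        return thinking.replace("<think>", "").strip(), answer.strip()
    return "", response.strip()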

Part of the Zen Family • Collection • GitHub
