gpt-oss-20b-2048-Calibration-FP8

Premium FP8 quantization with 2,048-sample calibration across 4 diverse datasets

This is a premium FP8 quantized version of openai/gpt-oss-20b featuring rigorous multi-dataset calibration for production-grade reliability. Quantized by TevunahAi on enterprise-grade hardware.

🎯 Recommended Usage: vLLM

For optimal performance with full FP8 benefits and premium calibration quality, use vLLM or TensorRT-LLM:

Quick Start with vLLM

pip install vllm

Python API:

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

# vLLM auto-detects FP8 from the model config
llm = LLM(model="TevunahAi/gpt-oss-20b-2048-Calibration-FP8", dtype="auto")

# Build the chat prompt with the model's chat template
messages = [{"role": "user", "content": "Explain quantum computing"}]
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/gpt-oss-20b-2048-Calibration-FP8")
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

sampling_params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate([prompt], sampling_params)

for output in outputs:
    print(output.outputs[0].text)

OpenAI-Compatible API Server:

vllm serve TevunahAi/gpt-oss-20b-2048-Calibration-FP8 \
    --dtype auto \
    --max-model-len 8192

Then use with OpenAI client:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",  # dummy key
)

response = client.chat.completions.create(
    model="TevunahAi/gpt-oss-20b-2048-Calibration-FP8",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ],
    temperature=0.7,
    max_tokens=512,
)

print(response.choices[0].message.content)
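
For long responses you can stream tokens as they arrive. A minimal sketch using the standard OpenAI SDK streaming interface against the same local server:

# Streaming variant: stream=True is standard OpenAI SDK usage;
# endpoint and model name are the same as above.
stream = client.chat.completions.create(
    model="TevunahAi/gpt-oss-20b-2048-Calibration-FP8",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    temperature=0.7,
    max_tokens=512,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)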

vLLM Benefits

  • Weights, activations, and KV cache in FP8 (see the sketch after this list)
  • ~20GB VRAM (50% reduction vs BF16)
  • Native FP8 tensor core acceleration on Ada/Hopper GPUs
  • Single GPU deployment on RTX 4090, RTX 5000 Ada, or H100
  • Premium 2,048-sample calibration for production reliability
  • Production-grade performance
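
To store the KV cache in FP8 explicitly rather than relying on defaults, you can request it at engine construction. A minimal sketch; kv_cache_dtype is a standard vLLM engine argument, but verify the accepted values against your vLLM version and GPU:

from vllm import LLM

# Request an FP8 KV cache explicitly; "fp8" lets vLLM pick an FP8
# format suited to your GPU (Ada/Hopper have native support).
llm = LLM(
    model="TevunahAi/gpt-oss-20b-2048-Calibration-FP8",
    dtype="auto",
    kv_cache_dtype="fp8",
    max_model_len=8192,
)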

⚙️ Alternative: Transformers (Not Recommended)

This model can be loaded with transformers, but the weights are decompressed from FP8 to BF16 during inference, requiring ~40GB+ VRAM. For 20B models, vLLM is strongly recommended.

Transformers Example:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Loads FP8 weights but decompresses to BF16 during compute
model = AutoModelForCausalLM.from_pretrained(
    "TevunahAi/gpt-oss-20b-2048-Calibration-FP8",
    device_map="auto",
    torch_dtype="auto",
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/gpt-oss-20b-2048-Calibration-FP8")

# Generate
messages = [{"role": "user", "content": "Explain quantum computing"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Requirements:

pip install "torch>=2.1.0" "transformers>=4.40.0" accelerate compressed-tensors

System Requirements:

  • ~40GB+ VRAM (decompressed to BF16)
  • Multi-GPU setup or A100/H100
  • CUDA 11.8 or newer

⚠️ Warning: vLLM is the recommended deployment method for 20B models.

📊 Model Details

Property              Value
--------------------  ----------------------------------------------------
Base Model            openai/gpt-oss-20b
Architecture          Mixture-of-Experts (21B total parameters, ~3.6B active per token)
Quantization Method   FP8 E4M3 weight-only
Framework             llm-compressor + compressed-tensors
Calibration Samples   2,048 (4-16x the typical 128-512)
Calibration Datasets  4 diverse sources
Storage Size          ~20GB (sharded safetensors)
VRAM (vLLM)           ~20GB
VRAM (Transformers)   ~40GB+ (decompressed to BF16)
Target Hardware       NVIDIA RTX 4090, RTX 5000 Ada, H100
Quantization Time     60.6 minutes

🏆 Premium Calibration

This model was quantized using TevunahAi's premium multi-dataset calibration process:

Calibration Details

  • Total Samples: 2,048 (4-16x the typical 128-512)
  • Datasets Used: 4 complementary sources
  • Coverage: Broad mix of reasoning, conversational, and instruction-following data

Dataset         Samples  Purpose
--------------  -------  -------------------------
Open-Platypus   512      STEM reasoning and logic
UltraChat-200k  512      Natural conversations
OpenHermes-2.5  512      Instruction following
SlimOrca        512      Diverse general tasks
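
A mix like this can be reproduced in spirit with llm-compressor and the datasets library. A hedged sketch, assuming the usual Hugging Face IDs and column names for these four datasets and a generic static FP8 recipe; the exact TevunahAi pipeline is not published:

from datasets import load_dataset, concatenate_datasets
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

PER_SOURCE = 512  # 4 x 512 = 2,048 calibration samples

def take(ds_id, split, to_text):
    # Sample PER_SOURCE examples and normalize them to a single "text" column.
    ds = load_dataset(ds_id, split=split).shuffle(seed=42).select(range(PER_SOURCE))
    return ds.map(lambda ex: {"text": to_text(ex)}, remove_columns=ds.column_names)

calibration = concatenate_datasets([
    take("garage-bAInd/Open-Platypus", "train",
         lambda ex: ex["instruction"] + "\n" + ex["output"]),
    take("HuggingFaceH4/ultrachat_200k", "train_sft",
         lambda ex: "\n".join(m["content"] for m in ex["messages"])),
    take("teknium/OpenHermes-2.5", "train",
         lambda ex: "\n".join(t["value"] for t in ex["conversations"])),
    take("Open-Orca/SlimOrca", "train",
         lambda ex: "\n".join(t["value"] for t in ex["conversations"])),
]).shuffle(seed=42)

# Static FP8 needs calibration data; llm-compressor tokenizes the "text"
# column (pre-tokenize yourself if your version expects input_ids).
recipe = QuantizationModifier(targets="Linear", scheme="FP8", ignore=["lm_head"])

oneshot(
    model="openai/gpt-oss-20b",
    dataset=calibration,
    recipe=recipe,
    num_calibration_samples=2048,
    max_seq_length=2048,
    output_dir="gpt-oss-20b-2048-Calibration-FP8",
)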

Why Premium Calibration?

Most FP8 quantizations use 128-512 samples from a single dataset. TevunahAi uses 2,048 samples across 4 diverse datasets, ensuring:

  • Superior robustness across task types
  • Better statistical coverage for quantization scales
  • Minimal quality loss compared to FP16
  • Production-grade reliability
  • Consistent performance on edge cases

When quality matters, choose TevunahAi premium calibration quantizations.

🔧 Why FP8 for 20B Models?

With vLLM/TensorRT-LLM:

  • 50% memory reduction vs BF16 (weights + activations + KV cache; see the arithmetic sketch at the end of this section)
  • Single GPU deployment on RTX 4090 (24GB) or RTX 5000 Ada (32GB)
  • Faster inference via native FP8 tensor cores
  • Better throughput with optimized kernels
  • Premium calibration maintains quality

With Transformers:

  • Smaller download size (~20GB vs ~40GB BF16)
  • Compatible with standard transformers workflow
  • ⚠️ Decompresses to BF16 during inference (no runtime memory benefit)
  • Requires 40GB+ VRAM, impractical for most setups

For 20B models, vLLM is essential for practical deployment.
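
The headline memory numbers are simple arithmetic over the stored parameter count (~21B, per the safetensors metadata); a back-of-the-envelope check:

PARAMS = 21e9  # stored parameters; activations and KV cache add on top

for name, bytes_per_param in [("BF16", 2), ("FP8 E4M3", 1)]:
    gib = PARAMS * bytes_per_param / 1024**3
    print(f"{name}: ~{gib:.0f} GiB of weights")

# BF16: ~39 GiB of weights
# FP8 E4M3: ~20 GiB of weights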

💾 Model Files

This model is sharded into multiple safetensors files (all required for inference). The compressed format enables efficient storage and faster downloads.

🌟 About GPT-OSS

GPT-OSS-20B is part of OpenAI's open-weight GPT-OSS release, offering:

  • Strong general-purpose capabilities
  • Efficient mixture-of-experts design (~3.6B active parameters per token)
  • Excellent instruction following
  • Broad task coverage
  • Apache 2.0 license for commercial use

🔬 Quantization Infrastructure

Professional hardware for premium calibration:

  • CPUs: Dual Intel Xeon Max 9480 (224 threads, 128GB HBM2e @ 2000 GB/s)
  • Memory: 256GB DDR5-4800 (16 DIMMs, 8-channel per socket, ~614 GB/s)
  • Total Memory Bandwidth: ~2,614 GB/s aggregate
  • Peak Memory Usage: ~190GB during quantization (model + calibration datasets)
  • GPU: NVIDIA RTX 5000 Ada Generation (32GB VRAM, native FP8 support)
  • Software: Ubuntu 25.10 | Python 3.12 | PyTorch 2.8 | CUDA 13.0 | llm-compressor

Why This Matters:

  • 60.6 minutes of rigorous quantization and validation
  • 2,048-sample calibration requires significant computational resources
  • Professional infrastructure enables production-grade quantization quality

📚 Original Model

This quantization is based on openai/gpt-oss-20b by OpenAI.

For comprehensive information about:

  • Model architecture and training methodology
  • Capabilities and use cases
  • Evaluation benchmarks
  • Ethical considerations

Please refer to the original model card.

🔧 Hardware Requirements

Minimum (vLLM):

  • GPU: NVIDIA RTX 4090 (24GB) or RTX 5000 Ada (32GB)
  • VRAM: 20GB minimum, 24GB+ recommended
  • CUDA: 11.8 or newer

Recommended (vLLM):

  • GPU: NVIDIA RTX 5000 Ada (32GB) / H100 (80GB)
  • VRAM: 24GB+
  • CUDA: 12.0+

Transformers:

  • GPU: Multi-GPU setup or A100 (40GB+)
  • VRAM: 40GB+ (single GPU) or distributed
  • Not recommended for practical deployment
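
To confirm your GPU has native FP8 tensor cores (Ada is compute capability 8.9, Hopper is 9.0), a quick check with PyTorch:

import torch

# Native FP8 tensor cores require compute capability 8.9 (Ada) or 9.0+ (Hopper).
major, minor = torch.cuda.get_device_capability(0)
print(torch.cuda.get_device_name(0), f"SM {major}.{minor}")
print("Native FP8 tensor cores:", (major, minor) >= (8, 9))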

📄 License

This model inherits the Apache 2.0 License from the original GPT-OSS model.

🙏 Acknowledgments

  • Original Model: OpenAI
  • Quantization Framework: Neural Magic's llm-compressor
  • Quantized by: TevunahAi

📝 Citation

If you use GPT-OSS, please cite the original work:

@misc{gptoss2025,
  title={GPT-OSS: OpenAI's Open-Weight Model Release},
  author={OpenAI},
  year={2025},
  url={https://huggingface.co/openai/gpt-oss-20b}
}

🌟 Why TevunahAi Premium Calibration FP8?

The Difference is in the Details

Aspect               Standard FP8       TevunahAi Premium FP8
-------------------  -----------------  ---------------------
Calibration Samples  128-512            2,048
Datasets             Single             4 diverse
Calibration Time     Minutes            60+ minutes
Edge Case Handling   Adequate           Superior
Output Consistency   Good               Excellent
Production Ready     Maybe              Absolutely
Infrastructure       Consumer/Prosumer  Enterprise-grade

Professional Infrastructure

  • 2.6 TB/s aggregate memory bandwidth
  • 190GB peak usage during 20B quantization
  • 2,048 samples across 4 complementary datasets
  • Quality-first approach over speed
  • Enterprise-ready results

When deploying 20B models in production, accept no compromises.


Professional AI Model Quantization by TevunahAi

Premium multi-dataset calibration on enterprise-grade infrastructure

View all models | Contact for custom quantization
