Health or Medicine GPT-OSS Model (19 Experts)

Project: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/

👥 Follow the Authors

Aman Priyanshu LinkedIn Twitter Website

Supriti Vijay LinkedIn Twitter Website

Introduction

This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 19 experts per layer based on activation patterns from the AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset. We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for health or medicine tasks.

⚠️ Experimental Model: This is an experimental pruned model that may not work well - check the examples below to see if the outputs meet your needs before use.

This pruning approach reduces the model size while attempting to preserve performance on the target domain.

Model Architecture & Statistics

| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~13.1B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 19 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Health or Medicine |

Pruning Methodology

What is Expert Pruning?

Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a small subset of these experts is activated for each token. Expert pruning involves:

  1. Analyzing Usage Patterns: Tracking which experts activate most frequently for specific tasks
  2. Removing Underutilized Experts: Discarding experts with low activation rates for the target domain
  3. Preserving Router Functionality: Maintaining the routing mechanism with fewer available experts
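As a rough illustration of the idea (not the authors' exact pipeline), the sketch below counts router selections over a stream of tokens and keeps the most-used experts for one layer; the shapes and data are stand-ins.

import torch

# Illustrative sketch: given the router's top-k expert choices for every
# token in an evaluation set, keep the most-used experts for one layer.
num_experts, num_kept, top_k = 32, 19, 4
topk_indices = torch.randint(0, num_experts, (10_000, top_k))  # stand-in data

# 1. Analyze usage patterns: count how often each expert is selected.
counts = torch.bincount(topk_indices.flatten(), minlength=num_experts)

# 2. Remove underutilized experts: keep the 19 most-activated ones.
kept = torch.topk(counts, num_kept).indices.sort().values

# 3. Preserve router functionality: the router stays in place; only its
#    rows (and the corresponding expert weights) are deleted.
print(kept.tolist())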

Our Approach

  • Data-Driven Selection: Used activation patterns from health or medicine evaluation tasks
  • Systematic Reduction: Reduced from 32 to 19 experts per layer
  • No Retraining: Direct removal without additional training steps

Performance & Applications

Pruning Benefits

  • Smaller Memory Footprint: expert parameters reduced to 59.4% of the original (19 of 32 experts per layer)
  • Reduced Computational Load: the router scores 19 candidate experts instead of 32 at each routing step
  • Focused Capabilities: retains the experts most relevant to health or medicine tasks
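A quick back-of-envelope check of these figures (the per-expert size below is inferred from the reported totals, not an official specification):

# Sanity-check the parameter figures above.
layers, orig_experts, kept_experts = 24, 32, 19

print(f"{kept_experts / orig_experts:.1%} of expert parameters retained")  # 59.4%

removed_params = 21e9 - 13.1e9                             # params dropped by pruning
removed_experts = layers * (orig_experts - kept_experts)   # 24 * 13 = 312
print(f"~{removed_params / removed_experts / 1e6:.0f}M params per expert")  # rough estimate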

Use Cases

  • Speculative Decoding: Draft model for the full GPT-OSS-20B (see the sketch after this list)
  • Resource-Constrained Deployment: Edge devices, mobile applications
  • Research: Study expert specialization in MoE models
  • Fine-tuning: Smaller base model for domain adaptation
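For the speculative-decoding use case, here is a minimal sketch using transformers' assisted generation. It assumes both models fit in available memory and share the GPT-OSS tokenizer; the prompt and generation arguments are illustrative.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Use the pruned model as a draft ("assistant") model for the full model.
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts",
    torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

inputs = tokenizer("List common symptoms of anemia.", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))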

Note: Performance may vary depending on how well the pruned experts match your specific use case.

Motivation & Expert Selection

This medical domain model incorporates experts that activated highly during health and medical tasks from MMLU medical subjects. These experts specialize in clinical knowledge, anatomy, medical procedures, and health-related reasoning.

The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:

  • GPQA: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
  • MMLU/MMLU-Pro: Comprehensive knowledge across 57+ subjects including science, medicine, law
  • SORRY-Bench: Safety evaluation across harmful content categories
  • Tulu3: Persona-driven instruction following with verifiable constraints
  • Polyglot-or-Not: Multilingual factual completion tasks

By identifying experts that consistently activated for health or medicine tasks, we created a specialized model that maintains domain expertise while reducing each layer from 32 to 19 experts, significantly shrinking the model.

Dataset & Analysis Foundation

This model is based on analysis from the GPT-OSS-20B MoE Expert Activations dataset available at: 🔗 https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations

The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
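As a hedged starting point for exploring the dataset yourself, the sketch below loads it and prints its schema; the split name and the field used in the commented tally are assumptions, so inspect the actual columns before adapting it.

from collections import Counter
from datasets import load_dataset

# Load the activations dataset; "train" split is an assumption.
ds = load_dataset("AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", split="train")
print(ds.column_names)  # inspect the real schema first

# Hypothetical tally of (layer, expert) usage for a target domain:
# counts = Counter()
# for row in ds:
#     for layer, experts in enumerate(row["top_k_experts"]):  # assumed field
#         for e in experts:
#             counts[(layer, e)] += 1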

Pruning Methodology

Our approach involves:

  1. Activation Analysis: Comprehensive evaluation of expert usage patterns across domain-specific tasks
  2. Expert Ranking: Identification of the most frequently activated experts for target domains
  3. Systematic Pruning: Reduction from 32 to 19 experts while preserving router functionality
  4. Quality Validation: Testing to ensure maintained performance on target tasks

This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.
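Schematically, the systematic-pruning step (step 3 above) amounts to slicing the expert and router tensors along the expert dimension. The helper below is an illustrative sketch with toy shapes, not the exact GPT-OSS module layout.

import torch

def prune_moe_layer(router_weight, expert_weights, kept_ids):
    """router_weight: (num_experts, hidden); expert_weights: (num_experts, ...)."""
    kept = torch.as_tensor(kept_ids)
    new_router = router_weight[kept].clone()    # router logits now cover 19 experts
    new_experts = expert_weights[kept].clone()  # matching expert parameters
    return new_router, new_experts

# Example: prune a toy layer from 32 to 19 experts.
router = torch.randn(32, 64)        # toy dimensions, not the real 2880
experts = torch.randn(32, 64, 64)
r19, e19 = prune_moe_layer(router, experts, list(range(19)))
print(r19.shape, e19.shape)         # (19, 64) and (19, 64, 64)

After slicing, the model config's expert count must also be updated to match, so that the router produces logits of the right size.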

Usage

CPU Inference

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts", 
    torch_dtype=torch.bfloat16, 
    device_map="cpu", 
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts")

# Generate with the model
messages = [
    {"role": "user", "content": "What are the main functions of the human heart?"}
]

inputs = tokenizer.apply_chat_template(
    messages, 
    add_generation_prompt=True, 
    return_tensors="pt", 
    return_dict=True,
    reasoning_effort="medium"
)

# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs, 
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id
)

# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)

Apple Silicon (MPS) Inference

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts", 
    torch_dtype=torch.float16,  # Better MPS compatibility
    device_map=device, 
    trust_remote_code=True,
    low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts")

# Generate with the model
messages = [
    {"role": "user", "content": "What are the main functions of the human heart?"}
]

inputs = tokenizer.apply_chat_template(
    messages, 
    add_generation_prompt=True, 
    return_tensors="pt", 
    return_dict=True,
    reasoning_effort="medium"
)

# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}

# Use torch.no_grad for MPS stability
with torch.no_grad():
    outputs = model.generate(
        **inputs, 
        max_new_tokens=512,
        do_sample=True,
        temperature=0.1,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        use_cache=True
    )

# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)

GPU Inference

For CUDA GPU inference, the model works with minimal modifications to the CPU example above. Simply change:

device_map="auto"  # Will automatically use GPU if available
torch_dtype=torch.bfloat16  # or torch.float16

Model Examples

The following examples demonstrate the model's performance on health or medicine tasks:

Example 1

Question: What are the main functions of the human heart?

Response:

<|channel|>analysis<|message|>Need concise answer.<|end|><|start|>assistant<|channel|>final<|message|>The human heart performs three core functions that keep the body alive and functioning:

| Function | What it does | How it works |
|----------|--------------|--------------|
| **Pumping (circulation)** | Moves blood throughout the body, delivering oxygen, nutrients, hormones, and removing waste. | The left ventricle ejects oxygen‑rich blood into the aorta; the right ventricle sends oxygen‑poor blood into the pulmonary artery. |
| **Pressure regulation** | Maintains blood pressure within a range that allows tissues to receive adequate blood flow. | The heart’s contractility, heart rate, and the resistance of the vascular system together determine systolic and diastolic pressures. |
| **Electrical coordination** | Generates and propagates the electrical impulses that trigger each heartbeat, ensuring the chambers contract in a synchronized rhythm. | The sinoatrial (SA) node initiates the impulse, which travels through the atrioventricular (AV) node, bundle branches, and Purkinje fibers to the ventricles. |

Together, these functions keep the circulatory system operating efficiently, supporting all metabolic processes in the body.<|return|>

Example 2

Question: Explain the difference between bacteria and viruses.

Response:

<|channel|>analysis<|message|>Need concise explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Bacteria vs. Viruses – the key differences**

| Feature | Bacteria | Viruses |
|--------|----------|--------|
| **Basic nature** | Living, self‑replicating cells | Non‑living particles that need a host cell to reproduce |
| **Size** | ~0.2–2 µm (micrometers) | ~20–200 nm (nanometers) |
| **Structure** | Cell wall, membrane, cytoplasm, DNA (often circular), ribosomes, organelles (e.g., mitochondria in some) | Protein coat (capsid) surrounding genetic material (DNA or RNA); no cellular machinery |
| **Genetic material** | DNA (usually circular plasmids + chromosome) | DNA or RNA (single or double strand) |
| **Replication** | Independent: divide by mitosis (bacterial cell division) | Dependent: hijack host cell’s replication machinery |
| **Metabolism** | Can photosynthesize, oxidize, ferment, etc. | No metabolism; rely entirely on host cell |
| **Infection mechanism** | Can colonize surfaces, form biofilms, or invade tissues; often spread by contact, airborne droplets, or vectors | Attach to specific host receptors, penetrate, replicate inside host cell |
| **Treatment** | Antibiotics (target cell wall, protein synthesis, DNA replication, etc.) | Antiviral drugs (target viral enzymes or entry), vaccines; antibiotics ineffective |
| **Evolutionary classification** | Organisms (prokaryotes) | Pathogens (not classified as organisms) |
| **Examples** | *Escherichia coli*, *Staphylococcus aureus*, *Bacillus subtilis* | Influenza virus, HIV, SARS‑CoV‑2, HPV |

**Bottom‑line:**  
- **Bacteria** are true living cells that can survive and grow on their own.  
- **Viruses** are essentially “genetic packages” that need a living cell to make copies of themselves.<|return|>

Example 3

Question: What are the symptoms and causes of diabetes?

Response:

<|channel|>analysis<|message|>Need to answer.<|end|><|start|>assistant<|channel|>final<|message|>**Diabetes mellitus** is a chronic metabolic disorder in which the body cannot use glucose (sugar) properly.  The two main types—Type 1 and Type 2—have overlapping symptoms but differ in their underlying causes.  Below is a concise overview of the most common signs, symptoms, and the key factors that lead to each type.

---

## 1.  Symptoms (Signs that may indicate diabetes)

| Symptom | Typical Onset | Why It Happens |
|---------|---------------|----------------|
| **Polyphagia** (excessive hunger) | Often early | Body can’t use glucose → cells starve → hunger signals |
| **Polydipsia** (excessive thirst) | Early | High blood sugar pulls water out of cells → dehydration |
| **Polyuria** (frequent urination) | Early | Kidneys try to excrete excess glucose → osmotic diuresis |
| **Weight loss** (especially in Type 1) | Early | Body burns fat & muscle for energy |
| **Fatigue / weakness** | Early | Cells lack glucose → low energy |
| **Blurred vision** | Early‑mid | Hyperglycemia swells the lens, altering focus |
| **Slow wound healing** | Mid‑to‑late | Poor circulation & immune function |
| **Frequent infections** (skin, urinary, thrush) | Mid‑to‑late | High glucose supports bacterial growth; immune dysfunction |
| **Peripheral neuropathy** (tingling, numbness) | Late | Chronic hyperglycemia damages nerves |
| **Foot ulcers / amputations** | Late | Neuropathy + poor circulation |
| **Diabetic ketoacidosis** (Type 1) | Sudden | Insulin deficiency → fat breakdown → ketones → acidosis |
| **Hyperosmolar hyperglycemic syndrome** (Type 2) | Sudden | Extremely high glucose → dehydration, altered mental status |

> **Note:** Many people, especially with Type 2, may have *no symptoms* for years. Routine screening is essential if risk factors are present.

---

## 2.  Causes (Underlying mechanisms)

### Type 1 Diabetes (T1D)

| Cause | Key Features |
|-------|--------------|
| **Autoimmune destruction of β‑cells** | The immune system attacks insulin‑producing cells in the pancreas. |
| **Genetic predisposition** | Certain H

Citation

If you use this model in your research, please cite:

@misc{priyanshu2025gptoss,
  title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
  author={Priyanshu, Aman and Vijay, Supriti},
  year={2025},
  howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
  note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
