Granite-4.0-1B

Model Summary: Granite-4.0-1B is a lightweight instruct model finetuned from Granite-4.0-1B-Base using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets. This model is developed using a diverse set of techniques, including supervised finetuning, reinforcement learning, and model merging.

Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may fine-tune Granite 4.0 Nano models to support languages beyond those included in this list.

Intended use: Granite 4.0 Nano instruct models feature strong instruction-following capabilities, bringing advanced AI within reach of on-device deployments and research use cases. Additionally, their compact size makes them well suited for fine-tuning on specialized domains without requiring massive compute resources.

Capabilities

  • Summarization
  • Text classification
  • Text extraction
  • Question-answering
  • Retrieval Augmented Generation (RAG)
  • Code related tasks
  • Function-calling tasks
  • Multilingual dialog use cases
  • Fill-In-the-Middle (FIM) code completions

Generation:

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

npm i @huggingface/transformers

Example: Basic chat generation

import { pipeline, TextStreamer } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/granite-4.0-1b-ONNX",
  { device: "webgpu", dtype: "q4" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What is the capital of France?" },
];

// Generate a response
const output = await generator(messages, {
    max_new_tokens: 512,
    do_sample: false,
    streamer: new TextStreamer(generator.tokenizer, { skip_prompt: true, skip_special_tokens: true }),
});
console.log(output[0].generated_text.at(-1).content);
// The capital of France is Paris.
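
Example: RAG-style prompting

A minimal sketch of a retrieval augmented generation style prompt, reusing the generator pipeline from the example above. One simple approach is to place the retrieved passages directly in the user message; the documents array here is illustrative and stands in for whatever your retriever returns.

// Illustrative retrieved passages (in practice these come from your retriever)
const documents = [
  "Granite 4.0 Nano is a family of lightweight language models released by IBM.",
  "The 1B dense variant supports a context length of 128K tokens.",
];

// Ground the question in the retrieved context via the user message
const ragMessages = [
  { role: "system", content: "Answer using only the provided documents." },
  {
    role: "user",
    content: `Documents:\n${documents.join("\n")}\n\nQuestion: What context length does the 1B dense model support?`,
  },
];

const ragOutput = await generator(ragMessages, { max_new_tokens: 256, do_sample: false });
console.log(ragOutput[0].generated_text.at(-1).content);
// Expected: an answer along the lines of "The 1B dense model supports a context length of 128K tokens."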

Evaluation Results:

| Benchmarks | Metric | 350M Dense | H 350M Dense | 1B Dense | H 1B Dense |
|---|---|---|---|---|---|
| **General Tasks** | | | | | |
| MMLU | 5-shot | 35.01 | 36.21 | 59.39 | 59.74 |
| MMLU-Pro | 5-shot, CoT | 12.13 | 14.38 | 34.02 | 32.86 |
| BBH | 3-shot, CoT | 33.07 | 33.28 | 60.37 | 59.68 |
| AGI EVAL | 0-shot, CoT | 26.22 | 29.61 | 49.22 | 52.44 |
| GPQA | 0-shot, CoT | 24.11 | 26.12 | 29.91 | 29.69 |
| **Alignment Tasks** | | | | | |
| IFEval | Instruct, Strict | 61.63 | 67.63 | 80.82 | 82.37 |
| IFEval | Prompt, Strict | 49.17 | 55.64 | 73.94 | 74.68 |
| IFEval | Average | 55.4 | 61.63 | 77.38 | 78.53 |
| **Math Tasks** | | | | | |
| GSM8K | 8-shot | 30.71 | 39.27 | 76.35 | 69.83 |
| GSM Symbolic | 8-shot | 26.76 | 33.7 | 72.3 | 65.72 |
| Minerva Math | 0-shot, CoT | 13.04 | 5.76 | 45.28 | 49.4 |
| DeepMind Math | 0-shot, CoT | 8.45 | 6.2 | 34 | 34.98 |
| **Code Tasks** | | | | | |
| HumanEval | pass@1 | 39 | 38 | 74 | 73 |
| HumanEval+ | pass@1 | 37 | 35 | 69 | 68 |
| MBPP | pass@1 | 48 | 49 | 65 | 69 |
| MBPP+ | pass@1 | 38 | 44 | 57 | 60 |
| CRUXEval-O | pass@1 | 23.75 | 25.5 | 33.13 | 36 |
| BigCodeBench | pass@1 | 11.14 | 11.23 | 30.18 | 29.12 |
| **Tool Calling Tasks** | | | | | |
| BFCL v3 | | 39.32 | 43.32 | 54.82 | 50.21 |
| **Multilingual Tasks** | | | | | |
| MULTIPLE | pass@1 | 15.99 | 14.31 | 32.24 | 36.11 |
| MMMLU | 5-shot | 28.23 | 27.95 | 45 | 49.43 |
| INCLUDE | 5-shot | 27.74 | 27.09 | 42.12 | 43.35 |
| MGSM | 8-shot | 14.72 | 16.16 | 37.84 | 27.52 |
| **Safety** | | | | | |
| SALAD-Bench | | 97.12 | 96.55 | 93.44 | 96.4 |
| AttaQ | | 82.53 | 81.76 | 85.26 | 82.85 |
Multilingual benchmarks and the included languages:
| Benchmarks | # Langs | Languages |
|---|---|---|
| MMMLU | 11 | ar, de, en, es, fr, ja, ko, pt, zh, bn, hi |
| INCLUDE | 14 | hi, bn, ta, te, ar, de, es, fr, it, ja, ko, nl, pt, zh |
| MGSM | 5 | en, es, fr, ja, zh |

Model Architecture:

The Granite-4.0-1B baseline is based on a decoder-only dense transformer architecture. Core components of this architecture are grouped-query attention (GQA), MLPs with SwiGLU activation, RMSNorm, and shared input/output embeddings.

| Model | 350M Dense | H 350M Dense | 1B Dense | H 1B Dense |
|---|---|---|---|---|
| Embedding size | 1024 | 768 | 2048 | 1536 |
| Number of layers | 28 attention | 4 attention / 28 Mamba2 | 40 attention | 4 attention / 36 Mamba2 |
| Attention head size | 64 | 64 | 128 | 128 |
| Number of attention heads | 16 | 12 | 16 | 12 |
| Number of KV heads | 4 | 4 | 4 | 4 |
| Mamba2 state size | - | 128 | - | 128 |
| Number of Mamba2 heads | - | 48 | - | 48 |
| MLP / shared expert hidden size | 2048 | 2048 | 4096 | 4096 |
| Num. experts | - | - | - | - |
| Num. active experts | - | - | - | - |
| Expert hidden size | - | - | - | - |
| MLP activation | SwiGLU | SwiGLU | SwiGLU | SwiGLU |
| Sequence length | 32K | 32K | 128K | 128K |
| Position embedding | RoPE | NoPE | RoPE | NoPE |
| # Parameters | 350M | 340M | 1.6B | 1.5B |
| # Active parameters | 350M | 340M | 1.6B | 1.5B |
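
To make the GQA numbers concrete, the sketch below estimates the KV-cache footprint of the 1B dense variant from the table values (40 attention layers, 4 KV heads, head size 128), assuming fp16 cache entries; the actual memory footprint depends on the runtime and quantization.

// KV-cache size estimate for the 1B dense variant, using values from the table above
const numLayers = 40;     // attention layers
const numKvHeads = 4;     // GQA: the 16 query heads share 4 KV heads
const headSize = 128;
const bytesPerValue = 2;  // assuming fp16 cache entries

// Keys and values are cached per layer and per KV head
const bytesPerToken = 2 * numLayers * numKvHeads * headSize * bytesPerValue;
console.log(`${bytesPerToken / 1024} KiB per token`); // 80 KiB

// At the full 128K sequence length this grows to roughly:
const contextLength = 128 * 1024;
console.log(`${(bytesPerToken * contextLength) / 1024 ** 3} GiB`); // 10 GiB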

Training Data: Overall, our SFT data is largely composed of three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) a select set of human-curated data.

Infrastructure: We trained the Granite 4.0 Nano language models on an NVIDIA GB200 NVL72 cluster hosted by CoreWeave. Intra-rack communication occurs via the 72-GPU NVLink domain, and a non-blocking, full Fat-Tree NDR 400 Gb/s InfiniBand network provides inter-rack communication. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

Ethical Considerations and Limitations: Granite 4.0 Nano Instruct Models are primarily finetuned on instruction-response pairs, mostly in English, along with multilingual data covering multiple languages. Although this model can handle multilingual dialog use cases, its performance on non-English tasks might not match its performance on English tasks. In such cases, introducing a small number of examples (few-shot prompting) can help the model generate more accurate outputs, as sketched below. While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We therefore urge the community to use this model with proper safety testing and tuning tailored to their specific tasks.
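
For example, a few-shot prompt for a German sentiment task might look like the following. This is a minimal sketch reusing the generator pipeline from the Generation section; the example pairs are illustrative.

// Few-shot prompting: prior user/assistant turns serve as in-context examples
const fewShotMessages = [
  { role: "system", content: "Classify the sentiment of each review as positive or negative." },
  { role: "user", content: "Das Essen war ausgezeichnet." },
  { role: "assistant", content: "positive" },
  { role: "user", content: "Der Service war leider sehr langsam." },
  { role: "assistant", content: "negative" },
  { role: "user", content: "Ein wunderbarer Abend, wir kommen wieder." },
];

const fewShotOutput = await generator(fewShotMessages, { max_new_tokens: 8, do_sample: false });
console.log(fewShotOutput[0].generated_text.at(-1).content); // expected: "positive"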
