ICEBLINK

VERSION 2


Overview

Another attempt at finetuning GLM 4.5 Air, this time using a different training framework, updated data, and better hyperparameters.

This is a creative writing and RP model. It's pretty verbose. The intent is to keep the behavior of the original model while improving writing, dialogue & creativity.

Compared to the original Iceblink, the finetune's effect is more pronounced here, hopefully with minimal impact on intelligence.

SillyTavern Settings

Recommended Roleplay Format

> Actions: In plaintext
> Dialogue: "In quotes"
> Thoughts: *In asterisks*
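
Put together, a single reply in this format might look like the following (illustrative only):

> She slides the map across the table. "We leave at dawn." *He'd better not be late this time.*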

Recommended Samplers

> Temp: 0.8
> MinP: 0.05
> TopP: 0.95 - 1.00
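
If you're running the model outside SillyTavern, these samplers map directly onto an OpenAI-compatible endpoint. Below is a minimal sketch assuming a local llama.cpp or vLLM server; `min_p` isn't a standard OpenAI parameter, so it goes through `extra_body`, and the base URL, API key, and messages are placeholder assumptions.

```python
from openai import OpenAI

# Placeholder endpoint: point this at your llama.cpp / vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="zerofata/GLM-4.5-Iceblink-v2-106B-A12B",
    messages=[
        {"role": "system", "content": "You are a creative roleplay partner."},
        {"role": "user", "content": "The tavern door creaks open..."},
    ],
    temperature=0.8,              # Temp: 0.8
    top_p=0.95,                   # TopP: 0.95 - 1.00
    extra_body={"min_p": 0.05},   # MinP: 0.05 (backend-specific param)
)
print(response.choices[0].message.content)
```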

Instruct

GLM 4.5 (no thinking): SillyTavern Preset
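
Outside SillyTavern, GLM 4.5's thinking mode can usually be switched off through the chat template. The sketch below is an assumption-laden example: it presumes a backend (such as vLLM or SGLang) that forwards `chat_template_kwargs` to the template; if yours ignores the kwarg, check your backend's docs.

```python
from openai import OpenAI

# Placeholder endpoint, as in the sampler sketch above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Request a non-thinking reply, assuming the server forwards
# chat_template_kwargs to GLM 4.5's chat template.
response = client.chat.completions.create(
    model="zerofata/GLM-4.5-Iceblink-v2-106B-A12B",
    messages=[{"role": "user", "content": "Continue the scene."}],
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```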

Quantizations

Creation Process

Creation Process: SFT

SFT on approximately 13 million tokens of SFW / NSFW RP, stories, and creative instruct & chat data. Some of the SFW datasets are public and can be found in the model's datasets list.

I've switched from Axolotl to MS-Swift with Megatron for training MoE models. This gives a roughly 5-10x speedup, thanks to escaping the naive MoE implementation in TRL. This run took only 40 minutes to train, excluding environment setup time.

A low LR appears to be king for GLM Air; going any higher, I've found it extremely easy to overcook the model.
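
For reference, an SFT launch in MS-Swift's Python API looks roughly like the sketch below. This is an approximation only: the actual run used the Megatron backend, and every value here (dataset path, LR, epochs, output dir) is an assumed placeholder rather than the real recipe. Check the ms-swift docs for the Megatron-specific entry point.

```python
# Illustrative sketch of an ms-swift SFT launch; not the author's config.
from swift.llm import sft_main, TrainArguments

sft_main(TrainArguments(
    model="zai-org/GLM-4.5-Air",                    # assumed base model id
    dataset=["path/to/rp_and_creative_data.jsonl"],  # placeholder dataset
    train_type="full",
    learning_rate=1e-5,   # keep the LR low; higher values overcook GLM Air
    num_train_epochs=1,
    output_dir="output/iceblink-v2",
))
```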

Special Thanks

A shoutout to the people in the BeaverAI Discord who helped me test this model and its intermediate versions.

ddh0 (Madison), Ambius, Dysfunctional & my dude.
