PAINTED FANTASY v3

Magistral-Small-2509-24B


Overview

This is an uncensored creative model intended to excel at character-driven RP / ERP.

Magistral is a really good model, though I never got around to making a finetune of it until now.

Lots of improvements and experimental additions to the dataset, along with better overall training parameters. IMO, this feels very different from v2, in a good way.

SillyTavern Settings

Recommended Roleplay Format

> Actions: In plaintext
> Dialogue: "In quotes"
> Thoughts: *In asterisks*

Suggested Samplers

> Temp: 0.7-0.8
> MinP: 0.075
> TopP: 0.95-1.00
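As a sketch of how these values slot into a backend request, here is a payload for a generic OpenAI-compatible completion endpoint (a local llama.cpp / TabbyAPI-style server is assumed; `min_p` support varies by backend, and `max_tokens` is an arbitrary illustration value):

```python
def sampler_payload(prompt: str) -> dict:
    """Build a generation request using the suggested sampler ranges."""
    return {
        "prompt": prompt,
        "temperature": 0.75,   # suggested range: 0.7-0.8
        "min_p": 0.075,        # suggested MinP
        "top_p": 0.95,         # suggested range: 0.95-1.00
        "max_tokens": 512,     # illustration only, not a recommendation
    }

payload = sampler_payload("Hello")
```

In SillyTavern itself these map directly onto the Temperature, Min P, and Top P sliders; the payload form is only relevant if you are driving the backend yourself.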

Instruct

Mistral v7 Tekken
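For reference, a single-turn prompt in the Mistral v7 (Tekken) layout looks roughly like the sketch below. This is an assumption-laden illustration: the authoritative template ships with the tokenizer, so in practice you should rely on `tokenizer.apply_chat_template` (or SillyTavern's built-in Mistral v7 Tekken preset) rather than hand-assembling strings.

```python
def format_v7_tekken(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Mistral v7 (Tekken) layout.

    Sketch only: token spellings are taken from the published Mistral
    templates and may lag behind the tokenizer's actual chat template.
    """
    return (
        "<s>"
        f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT]"
        f"[INST]{user}[/INST]"
    )

prompt = format_v7_tekken("You are {{char}}.", "Hello!")
```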

Quantizations

EXL3

> 3bpw
> 3.5bpw
> 4bpw
> 5bpw
> 6bpw
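To pick a quant, a rough weight-only size estimate is `params × bpw / 8` bytes. Real VRAM usage is higher (KV cache, activations, and some tensors are quantized differently), so treat these numbers as a floor, not a fit guarantee:

```python
N_PARAMS = 24e9  # nominal parameter count for a 24B model

def weight_gb(bpw: float) -> float:
    """Approximate on-disk / in-VRAM weight size in GB for a given bpw."""
    return N_PARAMS * bpw / 8 / 1e9

# Estimates for the EXL3 quants listed above
estimates = {bpw: round(weight_gb(bpw), 1) for bpw in (3.0, 3.5, 4.0, 5.0, 6.0)}
# e.g. 4bpw -> ~12.0 GB of weights, 6bpw -> ~18.0 GB
```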

Training Process

Training process: SFT > DPO

The SFT dataset has been greatly expanded from previous models: 31M tokens, 25M trainable. Training uses rsLoRA and trains all modules, including lm_head & embed_tokens (at a lower LR).
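The rsLoRA detail matters more than it looks: standard LoRA scales the low-rank update `BA` by `alpha / r`, which shrinks as rank grows, while rsLoRA uses `alpha / sqrt(r)`, keeping the update magnitude stable at higher ranks. The `alpha` and `r` values below are illustrative, not this model's actual training config:

```python
import math

def lora_scale(alpha: float, r: int) -> float:
    """Standard LoRA scaling factor: alpha / r."""
    return alpha / r

def rslora_scale(alpha: float, r: int) -> float:
    """Rank-stabilized LoRA scaling factor: alpha / sqrt(r)."""
    return alpha / math.sqrt(r)

# Illustrative values only -- not the finetune's real hyperparameters
alpha, r = 32, 64
standard = lora_scale(alpha, r)      # 0.5: the update is heavily damped
stabilized = rslora_scale(alpha, r)  # 4.0: magnitude preserved at high rank
```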

The SFT dataset consists of RP/ERP, stories, in-character assistant data, anime & VTuber AMAs, and Nitral's Reddit NSFW writing prompts (slightly modified).

DPO focused on reducing repetition, character misgendering, parroting, and general logic issues. Chosen responses are high-quality, self-edited ERP / RP; rejected responses are MS3.2 outputs instructed to make mistakes / ignore instructions.
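Each chosen/rejected pair feeds the standard DPO objective (Rafailov et al.), which pushes the policy to prefer the chosen response relative to a frozen reference model. A minimal sketch for one pair, with made-up log-probabilities purely for illustration:

```python
import math

def dpo_loss(pi_chosen: float, pi_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """-log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Illustrative log-probs: the policy has gained probability on the
# self-edited chosen response and lost it on the sabotaged MS3.2 one.
loss = dpo_loss(pi_chosen=-10.0, pi_rejected=-14.0,
                ref_chosen=-12.0, ref_rejected=-12.0)
# Positive margin -> loss below log(2); zero margin would give log(2).
```

In practice this is what a library like TRL's `DPOTrainer` computes over batches; the sketch just makes the per-pair math visible.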

Safetensors · 24B params · BF16
