The model is currently being retrained from scratch from the stock base model.
✨ Model Summary
A lightweight, reasoning-focused language model fine-tuned on a curated blend of instruction data and character-driven prompts. Designed for both single-turn logic and multi-turn, in-character reasoning tasks.
Reasoning uses the format: `<think>\n ... \n</think>\n` followed by the response.
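As a minimal sketch of how that format can be consumed downstream (the helper name and the sample output are illustrative only, not produced by this model), the reasoning block can be split from the final answer like this:

```python
import re

# Hypothetical raw output following the <think> ... </think> layout described above;
# the actual text is made up for illustration.
raw_output = "<think>\nThe user asks for 2 + 2, which is 4.\n</think>\nThe answer is 4."

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate the <think> block from the final response.

    Returns (reasoning, response); reasoning is empty if no block is found.
    """
    match = re.search(r"<think>\n(.*?)\n</think>\n?", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    response = text[match.end():].strip()
    return reasoning, response

reasoning, response = split_reasoning(raw_output)
print("Reasoning:", reasoning)
print("Response:", response)
```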
🧠 Training Dataset
The model was fine-tuned on:
- 8,090 entries of mixed single-turn and multi-turn reasoning instructions, covering problem-solving, logical inference, and step-by-step tasks.
- 3,178 entries of mixed single-turn and multi-turn in-character reasoning / RP-style instructions.
- 2,227 multi-turn entries derived from character card reasoning.