G3-27B-Animus-V12.0-GGUF
Send me your support to help me feed the data beast! Also taking commissions for universe-specific models.
Support on Ko-fi
Quantized Models
The quantized model files are available for download. Click the buttons below to view the files.
Download EXL3 Files →
Character Card & Lore Book
For the best roleplaying experience, it is highly recommended to use the provided character card and lore book. These files help guide the model's persona and provide rich, in-universe context.
Download Files →
Sampler Presets
For a seamless setup in SillyTavern, you can download pre-configured sampler presets. These are tuned to provide an optimal balance between creativity and narrative coherence for this model.
Simply download the .json file below and import it into SillyTavern's sampler presets menu.
- Default:
  - Temp: 1
  - Min P: 0.03
  - Nsigma: 2
  - DRY: 0.8, 1.75, 4
- Creative alternative, if supported:
  - Temp: 0.8-1.2
  - Min P: 0.02
  - DRY: 0.8, 1.75, 4
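If you prefer to build the preset file by hand instead of downloading it, the default settings above can be written out as an importable `.json`. This is only a sketch: the field names used below (`temp`, `min_p`, `nsigma`, `dry_multiplier`, `dry_base`, `dry_allowed_length`) are assumptions about SillyTavern's preset schema, so compare them against a preset exported from your own install before relying on it.

```python
import json

# Sketch of a SillyTavern-style sampler preset for the default settings above.
# Field names (temp, min_p, nsigma, dry_*) are assumptions; verify against
# a preset file your SillyTavern version actually exports.
preset = {
    "name": "Animus-V12-Default",
    "temp": 1.0,
    "min_p": 0.03,
    "nsigma": 2,
    "dry_multiplier": 0.8,       # DRY: multiplier
    "dry_base": 1.75,            # DRY: base
    "dry_allowed_length": 4,     # DRY: allowed length
}

# Write the preset so it can be imported from SillyTavern's presets menu.
with open("animus_v12_default.json", "w") as f:
    json.dump(preset, f, indent=2)
```

The DRY triple on the card (`0.8, 1.75, 4`) is assumed here to mean multiplier, base, and allowed length, in that order.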
Roleplay Format Guide
For the best results, use this structured format. This helps the AI clearly distinguish between actions, inner thoughts, and dialogue.
- Actions / Descriptions: *He walked across the room and stared out the window.*
- Inner Thoughts: *-I wonder what she's thinking.-*
- Dialogue: Alex (Curious): "What do you see out there?"
Standard novel-style formatting is also understood, but this structured format is preferred for clarity.
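Because the three channels use distinct delimiters, a turn in this format can also be split apart mechanically, which is handy for scripting around model output. The parser below is purely illustrative (it is not part of the model or SillyTavern) and assumes the exact delimiters shown above.

```python
import re

# Illustrative parser for the roleplay format above (not part of the model):
#   *action*          -> actions / descriptions
#   *-inner thought-* -> inner thoughts
#   Name (Tone): "…"  -> dialogue
THOUGHT = re.compile(r"\*-(.+?)-\*")
# Negative lookarounds keep plain *action* from also matching *-thought-*.
ACTION = re.compile(r"\*(?!-)(.+?)(?<!-)\*")
DIALOGUE = re.compile(r'^(?P<name>[^(:\n]+)\((?P<tone>[^)]+)\):\s*"(?P<text>.+)"', re.M)

def parse_turn(text):
    """Split one model turn into thoughts, actions, and dialogue lines."""
    return {
        "thoughts": THOUGHT.findall(text),
        "actions": ACTION.findall(text),
        "dialogue": [
            {"name": m["name"].strip(), "tone": m["tone"], "text": m["text"]}
            for m in DIALOGUE.finditer(text)
        ],
    }
```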
Model Description
This is Version 12.0 in the Animus series. V12.0 is a direct fine-tune of Gemma-3-27B-it.
V12.0's strength comes from a novel dataset designed to teach the model the why behind the lore, not just the what. The training data is a mix of:
- A 3,000-example Q&A dataset: This data is framed as an in-character study session, like a student at Jade Mountain Academy learning about the history, relationships, and politics of Pyrrhia's tribes. This provides a deep, contextual understanding of the universe.
- A 3,000-example uncensored roleplay dataset: The same high-quality, mature roleplay scenarios used in previous versions, ensuring the model maintains its engaging and dynamic narrative capabilities.
The result is a model with exceptionally strong prose and a deep grasp of in-universe lore, making for a highly immersive and accurate roleplaying experience.
Note: in roleplay, the model closely follows the system prompt and first message. If the first assistant message is short, subsequent messages will tend to be short as well.
Training Details
V12.0 Training Process
V12.0 marks a shift from model merging to a focused, direct fine-tuning approach. This allows for greater control over the final model's characteristics.
- Base Model: Gemma-3-27B-it
- Hardware: 1x NVIDIA B200
- Training Time: 6 hours
- Epochs: 2
- LoRA Rank: 128
- Context Size: 8192
- Scheduler: Cosine
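For reference, the hyperparameters listed above can be collected into a single config sketch. The key names below imitate common LoRA fine-tuning tools and are assumptions, not the author's actual training script; only the values come from this card.

```python
# Sketch only: the V12.0 hyperparameters from this card gathered into one dict.
# Key names imitate common LoRA fine-tuning configs and are assumptions;
# the values are exactly those listed above.
animus_v12_config = {
    "base_model": "Gemma-3-27B-it",
    "lora_r": 128,            # LoRA rank
    "num_epochs": 2,
    "sequence_len": 8192,     # training context size
    "lr_scheduler": "cosine",
}
```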
Feature Update: Removal of DM Choices
A key feature in previous test versions—the presentation of multiple-choice actions (e.g., A, B, C) to guide the user—has been removed.
While a promising concept, this feature needs further refinement to ensure it enhances, rather than restricts, the roleplaying experience. It may be reintroduced in a more polished form in a future release. For now, the model returns to a more traditional, open-ended prose format.
Training Dataset
The V12.0 dataset consists of 6,000 high-quality examples, a combination of two distinct types:
- In-Character Q&A (3,000 examples): This new dataset simulates a student at Jade Mountain Academy studying the world's lore. It's composed of roleplay-style questions and answers covering tribe history, family dynamics, and political relationships. This method builds a foundational, interconnected understanding of the lore.
- Uncensored Roleplay (3,000 examples): This is the same mature, canon-centric dataset refined for previous versions. It explores pivotal "what-if" scenarios from the books using only canon characters, ensuring the model can handle complex and dramatic narratives.
Both datasets underwent a rigorous cleaning process to remove formatting artifacts, such as **scene transitions**, resulting in a cleaner and more natural narrative style.
Intended Use & Limitations
- Intended Use: The primary purpose of this model is creative writing and roleplaying within the Wings of Fire universe. However, user feedback indicates it is also highly effective for general-purpose roleplaying.
- Limitations & Quirks:
- Performance on tasks outside of its training domain (general knowledge, coding, etc.) is not guaranteed and will likely be poor.
- Versatility: Although it is tuned primarily for Wings of Fire, users have reported it is very capable of normal roleplay with other settings and characters.
- The model may "hallucinate" or generate plausible but non-canonical information, especially when pushed outside the established "what-if" scenarios.
- Content: The training data includes mature and darker themes from the Wings of Fire series, such as conflict, character death, and moral ambiguity. The model is capable of generating content reflecting these themes. As always, it is up to the user what they do with it.
- Formatting: Training data was cleaned to remove narrative artifacts like **scene transitions**. The model should now produce cleaner prose.
- Safety: This model has not undergone additional safety alignment beyond what was included in its base model. Standard responsible AI practices should be followed.
Acknowledgements
- Credit to Google for the powerful Gemma architecture.
- Credit to Google for the Gemini Pro model, used in dataset generation.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
Model tree for Darkhn-Quants/G3-27B-Animus-V12.0-GGUF
- Base model: google/gemma-3-27b-pt