Join our Discord! https://discord.gg/BeaverAI

More than 6000 members strong πŸ’ͺ A hub for users and makers alike!


Drummer proudly presents...

Cydonia 24B v3.1 πŸ’Ώ


When you see beauty in desolation, it changes something inside you. Desolation tries to colonize you.

Usage

  • Mistral v7 Tekken
  • Reasoning is optional and not automatic. It performs well, even as a non-reasoning model.
  • <think> capable*

* thinking capabilities may require some prompt wrangling and prefill

Description

It plays characters well and will drive the story with or without thinking mode. It will get spicy when the scenario calls for it and back down when it makes sense to. It also follows prompts well and generally formats messages as expected. I use it as a daily driver despite not having very powerful hardware, simply because it is that good!

Wonderful character focus and logic. Even without using thinking mode, it is still powerful. The current top choice in 2025.

For 24GB users, it's really comfortable to use. It generates quickly and the output is enjoyable across different cards. If you want to swipe to check other options, it doesn't take long, and the tradeoff versus bigger models isn't as tangible, though I don't really go beyond the ~30B level.

Performs really nicely at Q6; I could definitely see this becoming part of my main rotation.

Think Prefill Example

<think>
Okay, so this is a roleplay scenario where

(YMMV with this prefill. The idea is to frame its reasoning with a third-person perspective and remind it that it's roleplaying. It's a personal preference.)

Warning: Keep in mind that <think> is not a special token. It will be treated like an ordinary XML tag, which may or may not help your initial prompt.

You can find premade settings in the BeaverAI server: https://discord.com/channels/1238219753324281886/1382664993853407293/1387111905902198854
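Since `<think>` is plain text rather than a special token, frontends that don't recognize it will show the reasoning inline. A minimal sketch of stripping a leading think block from the model's output (the tag handling here is an assumption about how your frontend passes the raw text through):

```python
import re

def strip_think(text: str) -> str:
    # Remove a leading <think>...</think> block from the model output,
    # leaving only the visible reply. Because <think> is an ordinary
    # XML-style tag in the text stream, a regex is enough.
    return re.sub(r"^\s*<think>.*?</think>\s*", "", text, flags=re.DOTALL)

out = (
    "<think>\nOkay, so this is a roleplay scenario where...\n</think>\n"
    "The door creaks open."
)
strip_think(out)
```

Output with no `<think>` block at the start is returned unchanged.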

Special Thanks

Links

Moistier Alternative: https://huggingface.co/BeaverAI/Cydonia-24B-v3i-GGUF

config-v3j

Downloads last month: 2,004
Format: GGUF
Model size: 23.6B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
