This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the NuSLERP merge method, with mlabonne/gemma-3-12b-it-abliterated as the base.
Well, it's been a while, hasn't it? It's only been a short 5 months since I posted the last Nemesia version, saying the famous last words:
I will try swapping out a model or two in the merge and trying again to upload as a v2.0.
Then exams and university stuff crushed me whole, which wasn't great, and I didn't have the time or the compute to get back to merging on any reasonable timeframe until now.
Honestly, in the time since, Qwen2.5-7B really hasn't turned out to be all that interesting, with Mistral Nemo stealing most of its thunder. So I decided to update my Nemesia merge set with a new base: Gemma3-12B! In my experience it has been decent so far, so I'm excited to tinker with it.
In my testing, this thing is alright. I couldn't run any GGUFs of it since I don't even know what on earth has happened to my .venv over the past 5 months; I'm pretty sure my Transformers installation is corrupted, and I'm re-cloning llama.cpp as I write this. I ran the FP16 weights and they seemed coherent enough, but I'll leave that for you to decide.
This merge uses all of the special NuSLERP options because they're there, and it's based on the abliterated version of Gemma3 instead of the stock instruct model because I <3 mlabonne.
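If you're wondering what "spherical" interpolation actually does to the weights, here is a minimal sketch of classic two-tensor SLERP in plain NumPy. It's illustrative only (the slerp helper is mine, not mergekit's code), but it's the basic operation that NuSLERP builds on with its row-wise and task-vector options.

import numpy as np

def slerp(t, a, b, eps=1e-8):
    # Spherical linear interpolation between two flattened weight tensors:
    # walk a fraction t of the way along the arc from a to b.
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if theta < eps:
        # Nearly parallel tensors: plain linear interpolation is fine.
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

# Example: blend two fake weight vectors, favouring the first two-to-one
# (roughly what the 1.0 / 0.5 weights in the config below normalize to).
merged = slerp(1.0 / 3.0, np.random.randn(4096), np.random.randn(4096))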
Models Merged
The following models were included in the merge:
- qiuxi337/gemma-3-12b-it-grpo
- google/gemma-3-12b-it
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: mlabonne/gemma-3-12b-it-abliterated
    parameters:
      weight: 1.0
  - model: qiuxi337/gemma-3-12b-it-grpo
    parameters:
      weight: 0.5
  - model: google/gemma-3-12b-it
    parameters:
      weight: 0.2
merge_method: nuslerp
base_model: mlabonne/gemma-3-12b-it-abliterated
parameters:
  normalize: true
  int8_mask: true
  nuslerp_flatten: false
  nuslerp_row_wise: true
dtype: float16
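To reproduce or tweak the merge, mergekit's mergekit-yaml CLI takes a file like this directly (mergekit-yaml config.yaml ./output-dir). A rough Python sketch along the lines of mergekit's documented API is below; the output path is a made-up placeholder, and the exact MergeOptions fields may have changed since I last looked, so treat them as assumptions.

import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the configuration shown above (saved locally as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the merged model to a local directory.
run_merge(
    merge_config,
    out_path="./nemesia-gemma3-12b",  # hypothetical output directory
    options=MergeOptions(
        copy_tokenizer=True,  # keep the base model's tokenizer files
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)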