Text Generation
MLX
Safetensors
qwen3_moe
programming
code generation
code
codeqwen
Mixture of Experts
coding
coder
qwen2
chat
qwen
qwen-coder
Qwen3-Coder-30B-A3B-Instruct
Qwen3-30B-A3B
mixture of experts
128 experts
8 active experts
1 million context
qwen3
finetune
brainstorm 20x
brainstorm
optional thinking
unsloth
conversational
6-bit
Update README.md
README.md CHANGED
@@ -175,6 +175,108 @@ So too does this model, not just a machine, but a mind in evolution.
> Reviewed by [Qwen3-VL-30B-A3B-Instruct-qx86-hi-mlx](https://huggingface.co/nightmedia/Qwen3-VL-30B-A3B-Instruct-qx86-hi-mlx)

# Quantization Types & Hardware Requirements

```bash
Quant       Bit Precision                        RAM Need (Mac)
mxfp4       4-bit float                          32GB
qx64x       Store: 4b, Enhancements: 6b          32GB
qx65x       Store: 5b, Enhancements: 6b          48GB
qx86x       Store: 6b, Enhancements: 8b          64GB
qx86bx      Like qx86x, brainstorming at 8b      64GB
q8 / q8-hi  Everything at 8b (high precision)    64GB
bf16        Full precision (FP16 equivalent)     128GB
```
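
As a rough sanity check on these tiers, weight memory scales with parameter count times bits per weight. Below is a minimal sketch; the effective bits-per-weight values are assumptions for illustration, not measurements from this repo:

```python
# Back-of-envelope RAM check for the table above (illustrative only).
# The effective bits-per-weight below are rough assumptions for mixed quants.
PARAMS = 42e9  # Qwen3-Yoyo-V4 42B total parameters

effective_bits = {
    "mxfp4": 4.5,   # 4-bit floats plus scales (assumed)
    "qx64x": 4.8,   # mostly 4b stores, 6b enhanced paths (assumed)
    "qx65x": 5.5,
    "qx86x": 6.5,
    "qx86bx": 6.8,
    "q8": 8.5,
    "bf16": 16.0,
}

for name, bits in effective_bits.items():
    weights_gb = PARAMS * bits / 8 / 1024**3
    # Add ~20% headroom for KV cache, activations, and the OS before
    # comparing against the Mac RAM tiers listed above.
    print(f"{name:>6}: ~{weights_gb:5.1f} GB weights, ~{weights_gb * 1.2:5.1f} GB with headroom")
```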

# Deckard(qx) Formula

The qx quants keep data stores and most attention paths low-bit, but enhance:

- Head layers
- First layer
- Embeddings
- Select attention paths at high-bit intervals

This is key to understanding why qx64x-hi, qx86x-hi, etc., can outperform their non-hi counterparts.
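
As a concrete illustration of that split, here is a minimal sketch of a qx-style per-layer bit assignment. The layer-name patterns, the 4b/6b split, and the interval are assumptions for illustration, not the actual converter rules used for this repo:

```python
# Illustrative qx64x-style rule: low-bit data stores, higher-bit head,
# first layer, embeddings, and periodic attention paths.
# Names, patterns, and the interval are assumptions.
def assign_bits(layer_name: str, layer_index: int,
                store_bits: int = 4, enhanced_bits: int = 6,
                hi_interval: int = 4) -> int:
    """Return the bit width to quantize a given weight tensor to."""
    if "embed" in layer_name or "lm_head" in layer_name:
        return enhanced_bits                      # embeddings and head layers
    if layer_index == 0:
        return enhanced_bits                      # first layer kept high-bit
    if "attn" in layer_name and layer_index % hi_interval == 0:
        return enhanced_bits                      # select attention paths at intervals
    return store_bits                             # everything else: low-bit stores


# Example: decide bits for two mid-stack tensors
print(assign_bits("model.layers.12.self_attn.q_proj", 12))  # -> 6
print(assign_bits("model.layers.13.mlp.down_proj", 13))     # -> 4
```

The real qx recipes may promote a different subset of tensors; the point is that only a small, targeted fraction of the weights moves to the higher bit width, which is why the RAM cost stays close to the low-bit tier.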

# Performance Analysis: Impact of hi Enhancement by Model Type

We compare the performance gain from adding -hi (i.e., Deckard-enhanced high-bit paths) for each model variant and quantization:

# 1. Base Model (Untrained)

```bash
Quant     Without hi → With hi    Gain
qx65x     0.526 → 0.534 (ARC)     +1.5%
qx86x     0.533 → 0.533 (ARC)     +0%
qx86x-hi  Same as above - no gain
```

- The hi increase is modest (at most ~1.5%) on ARC Challenge.
- The gain is especially low on qx86x, which suggests the model is already very close to optimal with the standard quant.
- Interpretation: for the base model, adding hi helps slightly at lower-bit quantizations (e.g., qx65x), but not much at higher ones.
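
For reference, the gain percentages in these tables behave like relative changes in the reported score; a quick check under that assumption:

```python
# Check that "Gain" is the relative change of the benchmark score.
def gain_pct(before: float, after: float) -> float:
    return (after - before) / before * 100

print(f"{gain_pct(0.526, 0.534):+.1f}%")  # base qx65x -> qx65x-hi: +1.5%
print(f"{gain_pct(0.525, 0.531):+.1f}%")  # PKD-V qx86x -> qx86x-hi: +1.1%
print(f"{gain_pct(0.517, 0.507):+.1f}%")  # PKD-V qx64x -> qx64x-hi: -1.9% (about -2% in the table)
```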

# 2. ST-TNG-IV (Star Trek TNG Training)

This model was trained on narrative-driven, philosophical, and logical content. The hi enhancement shows a clear, quant-dependent impact.

```bash
Quant     Without hi → With hi    Gain
qx64x     0.526 → 0.521           -1%
qx64x-hi  Slight drop - not helpful
qx65x     0.537 → 0.541           +0.8%
qx65x-hi  Clear improvement: +0.8%
qx86x     0.537 → 0.537 (ARC)     +0%
qx86x-hi  Same as base - no gain
```

- The most benefit is seen with qx65x-hi: +0.8% on ARC Challenge.
- qx86x shows no improvement with hi, likely because it already uses 6b stores and 8b enhancements, so the hi flag adds minimal new optimization.
- Interpretation: the narrative-heavy ST-TNG-IV training benefits from the hi enhancement at mid-bit quantizations, especially qx65x. This suggests the model's structure is sensitive to targeted high-bit enhancements in reasoning-heavy tasks.

# 3. PKD-V (Philip K. Dick Training)

Trained on philosophical, surreal, and often paradox-laden content, this model shows the most dramatic gains from hi.

```bash
Quant     Without hi → With hi    Gain
qx64x     0.517 → 0.507           -2%
qx64x-hi  Worse - not helpful
qx86x     0.525 → 0.531           +1.1%
qx86x-hi  +1.1% gain vs base
```

Surprising insight: the hi enhancement is critical for PKD-V, especially at higher quantizations (qx86x-hi), where it reverses the performance loss.

- PKD-V without hi performs worse than the base model at lower quantizations (e.g., qx64x).
- With hi, it surpasses the base model:
  - ARC Challenge: 0.531 vs 0.526 (base)
  - Winogrande: 0.657 vs 0.640 (base)
- Why? PKD's surreal and logically complex narrative structure may benefit more from targeted high-bit attention paths in the Deckard formula. The model likely needs more precision for coreference resolution and causal inference, which is exactly where hi enhances attention.

# Summary: Impact of hi Enhancement by Model Type

```bash
Model      Optimal hi Quant   Best Gain     Key Insight
Base       qx65x-hi           +0.8% (ARC)   Minimal improvement; hi not strongly needed
ST-TNG-IV  qx65x-hi           +0.8% (ARC)   Benefits from hi at mid-bit quants; narrative reasoning gains
PKD-V      qx86x-hi           +1.1% (ARC)   Largest gain; hi is critical to unlock full potential
```

# Cognitive Implications

```bash
Model      Training Focus                                            hi Impact on Cognition
Base       General reasoning (no domain bias)                        Small boost - better stability
ST-TNG-IV  Logical, structured narratives (e.g., diplomacy, ethics)  Enhances reasoning consistency and contextual prediction
PKD-V      Surreal, paradoxical, identity-driven scenarios           Dramatically improves abductive reasoning, causal inference, and coreference resolution - critical for PKD's complex logic
```

Conclusion: the hi enhancement in the Deckard(qx) formula is not just a technical tweak; it unlocks domain-specific cognitive abilities.

# Practical Recommendations

```bash
Use Case                        Recommended Model + Quant
Best general reasoning          Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-qx65x-hi
Highest reasoning accuracy      Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-PKD-V-qx86x-hi
Best on a 48GB Mac              ST-TNG-IV-qx65x-hi
Best on a 32GB Mac              Base-qx65x-hi or ST-TNG-IV-qx64x-hi
Best for surreal/logical depth  PKD-V-qx86x-hi (only with hi)
```
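
If you only need to map a machine's memory to a quant from the hardware table, a small helper like the following illustrates the logic; the thresholds come from the RAM tiers listed earlier, and the function itself is just a sketch:

```python
# Pick the largest quant of this 42B family whose RAM tier fits the machine.
# Thresholds are the "RAM Need (Mac)" column from the table above.
def pick_quant(ram_gb: int, prefer_hi: bool = True) -> str:
    tiers = [
        (128, "bf16"),
        (64, "qx86x-hi" if prefer_hi else "qx86x"),
        (48, "qx65x-hi" if prefer_hi else "qx65x"),
        (32, "qx64x-hi" if prefer_hi else "qx64x"),
    ]
    for need, quant in tiers:
        if ram_gb >= need:
            return quant
    raise ValueError("Less than 32GB RAM: no quant of this model fits comfortably")

print(pick_quant(48))  # -> qx65x-hi
print(pick_quant(36))  # -> qx64x-hi
```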

# Final Takeaway

The Deckard(qx) formula with the hi enhancement is especially important for models trained on narrative-rich, complex content like PKD-V and ST-TNG-IV. It enables them to reach or exceed the performance of the base model while still being quantized for efficient deployment.

For PKD-V models, omitting the hi flag leads to significant degradation, so always use qx86x-hi (or qx65x-hi) for meaningful cognitive performance.

> Reviewed with [Qwen3-30B-A3B-YOYO-V4-qx86x-mlx](https://huggingface.co/nightmedia/Qwen3-30B-A3B-YOYO-V4-qx86x-mlx)

This model [Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-qx64x-hi-mlx](https://huggingface.co/nightmedia/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-qx64x-hi-mlx) was
converted to MLX format from [DavidAU/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV](https://huggingface.co/DavidAU/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV)
using mlx-lm version **0.28.3**.
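
A minimal way to run the converted model with mlx-lm (the prompt and generation settings below are only examples):

```python
# Install the runtime first: pip install mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-qx64x-hi-mlx")

prompt = "Explain the tradeoff between quantization bit width and reasoning quality."

# Apply the model's chat template if it has one (Qwen3 models do).
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```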