This is an MXFP4_MOE quantization of the model Huihui-MoE-23B-A4B-abliterated.

Model quantized from the F16 GGUFs at: https://huggingface.co/DevQuasar/huihui-ai.Huihui-MoE-23B-A4B-abliterated-GGUF

Original model: https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated
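Below is a minimal usage sketch for loading this quantized GGUF with `huggingface_hub` and `llama-cpp-python`. The GGUF filename, context size, and GPU-offload settings are assumptions, not taken from this repository; check the repository's file list for the actual filename, and make sure your llama-cpp-python build is recent enough to support the qwen3moe architecture.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized GGUF from this repository.
# NOTE: the filename below is hypothetical; use the actual file listed in the repo.
model_path = hf_hub_download(
    repo_id="noctrex/Huihui-MoE-23B-A4B-abliterated-MXFP4_MOE-GGUF",
    filename="Huihui-MoE-23B-A4B-abliterated-MXFP4_MOE.gguf",
)

# Load the model; n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple chat-style generation.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize mixture-of-experts models in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```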

Format: GGUF, 4-bit (MXFP4)
Model size: 23B params
Architecture: qwen3moe