Barcenas Mixtral 8x7b, based on argilla/notux-8x7b-v1

This is a 4-bit quantized version of that model, making it more accessible to users with limited hardware.

Training with DPO (Direct Preference Optimization) and a Mixture-of-Experts (MoE) architecture makes it a powerful and innovative model.
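A minimal sketch of loading a 4-bit quantized model like this one with the Transformers library and bitsandbytes. The repository id below is an assumption, not confirmed by this card; substitute the actual Hugging Face repo name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical repo id -- replace with the model's actual repository name.
model_id = "Danielbrdz/Barcenas-Mixtral-8x7b-based"

# 4-bit NF4 quantization config, a common setup for Mixtral-class MoE models.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available devices
)

prompt = "Hola, ¿cómo estás?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading a 24B-parameter model in 4-bit reduces memory needs to roughly a quarter of the F16 footprint, which is what makes the model usable on consumer GPUs.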

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽

Model size: 24.2B parameters (Safetensors; tensor types F32, F16, U8)