
Tsunemoto GGUFs of mistral-ft-optimized-1218

This is a GGUF quantization of mistral-ft-optimized-1218.

Original Repo Link: Original Repository

Original Model Card:


This model is intended to be a strong base suitable for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it's one of the strongest models for most downstream tasks. You can read more about our development and evaluation process here.

Downloads last month: 25
Model size: 7.24B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
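The bit widths above trade file size against accuracy: fewer bits per weight means a smaller download and lower memory use, at the cost of rounding error. A minimal sketch of the idea behind block quantization, loosely modeled on llama.cpp-style symmetric 4-bit schemes (the 32-element block size and rounding details here are illustrative, not the exact GGUF format):

```python
import numpy as np

def quantize_block_4bit(x):
    # Symmetric 4-bit quantization: map a block of floats to signed
    # integers in [-8, 7] plus one floating-point scale per block.
    amax = np.max(np.abs(x))
    scale = amax / 7.0 if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_block(q, scale):
    # Recover approximate floats: each weight costs 4 bits instead of 32.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(32).astype(np.float32)  # one 32-element block
q, s = quantize_block_4bit(weights)
restored = dequantize_block(q, s)
max_err = np.max(np.abs(weights - restored))
print(f"max abs error: {max_err:.4f} (scale {s:.4f})")
```

With symmetric rounding, the per-weight error is bounded by half the block's scale, which is why lower bit counts (coarser scales) degrade quality more.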
