Update README.md
#23 opened 6 days ago by bullerwins

Add 'transformers' tag
#22 opened 6 days ago by betki

Add 'pytorch' tag
#21 opened 6 days ago by betki

Immaculate
#20 opened 8 days ago by annettedattolo

Max model len is 32768 when serving with vllm and not 40960
#19 opened 8 days ago by f14

VLLM Reasoning parser
#17 opened 12 days ago by Rictus

Model always ends generation with \boxed{}
#16 opened 13 days ago by cbunivofutah

Model generating non-stop when used in Cline through vLLM
#15 opened 13 days ago by mhwang093

output issue
#14 opened 13 days ago by mobo68

No multimodal :c
#13 opened 14 days ago by nicolollo

Greek Language
#12 opened 14 days ago by myrulezzz

Think token
#11 opened 14 days ago by iyanello

Error with vLLM docker image
#10 opened 14 days ago by mhwang093

MMMU-Pro Vision with Magistral Small
#9 opened 14 days ago by tomrance

GGUFs (correct) and BF16 - HF, Transformers, with correct tokenizers / JSONs
#8 opened 15 days ago by DavidAU

So this is just an SFT "distill" of Magistral-Medium?
#6 opened 15 days ago by gghfez

tokenizer
#5 opened 15 days ago by ctranslate2-4you

Missing Tokenizer/Processor for use with Transformers
#3 opened 15 days ago by mgoin

Cool, but where are the Magistral-Medium-2506 weights?
#2 opened 15 days ago by celsowm
