What version of llama.cpp do you use to run this model?

#1
by saipangon - opened
DevQuasar org

Works with latest!

I have tried to run this model with koboldcpp and it gives terrible responses from the first message: only two-letter replies, like "H F H F H F", repeating like that. Something is definitely wrong; didn't you test it? By the way, I tested the 4-bit K.L one.

DevQuasar org

It's working for me with llama.cpp backend:

[Screenshot From 2025-10-02 15-14-01]
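For reference, a typical way to run a GGUF quant with a recent llama.cpp build looks like the sketch below. The model filename is a placeholder, not the exact file from this repo; substitute your downloaded quant.

```shell
# Clone and build llama.cpp (an up-to-date build matters: older builds
# can produce garbage output for newer model architectures).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run the model with llama-cli; the .gguf path below is a placeholder.
./build/bin/llama-cli -m /path/to/model-Q4_K_L.gguf -p "Hello" -n 64
```

If llama.cpp produces coherent output while koboldcpp does not, the issue is likely an outdated backend in koboldcpp rather than a broken quant.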
