Convert to GGUF format

#1 by chieunq

Thank you for the great work!
I've just fine-tuned the Qwen2.5-Omni-7B model with LoRA and then merged the adapters into the base model. Now I'd like to convert the merged model to GGUF format so I can run it with llama.cpp.
Could you please guide me on how to do this?
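For reference, the generic flow I was planning to try is sketched below. It just drives llama.cpp's `convert_hf_to_gguf.py` script and the `llama-quantize` binary; the paths, output names, and quantization type are placeholders, and I'm not sure the converter actually supports the Omni architecture's audio/vision components, so please correct me if this is the wrong approach.

```python
# Rough sketch (not verified for Qwen2.5-Omni): convert a merged HF checkpoint
# to GGUF with llama.cpp's converter, then quantize it. All paths are assumptions.
import subprocess
from pathlib import Path

MERGED_MODEL_DIR = Path("./qwen2.5-omni-7b-merged")  # merged LoRA checkpoint (assumed path)
LLAMA_CPP_DIR = Path("./llama.cpp")                  # local clone of llama.cpp (assumed path)
GGUF_F16 = Path("./qwen2.5-omni-7b-f16.gguf")
GGUF_Q4 = Path("./qwen2.5-omni-7b-q4_k_m.gguf")

# Step 1: HF -> GGUF (f16) using the converter shipped with llama.cpp.
# Requires the script's Python deps: pip install -r llama.cpp/requirements.txt
subprocess.run(
    [
        "python",
        str(LLAMA_CPP_DIR / "convert_hf_to_gguf.py"),
        str(MERGED_MODEL_DIR),
        "--outfile", str(GGUF_F16),
        "--outtype", "f16",
    ],
    check=True,
)

# Step 2: quantize the f16 GGUF with the llama-quantize binary built from llama.cpp.
subprocess.run(
    [
        str(LLAMA_CPP_DIR / "build" / "bin" / "llama-quantize"),
        str(GGUF_F16),
        str(GGUF_Q4),
        "Q4_K_M",  # example quantization type; other types would also work
    ],
    check=True,
)
```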

Thanks in advance!
