Please update mmproj metadata to clip-vision
https://github.com/city96/ComfyUI-GGUF/pull/349
It causes loading errors in other tools, and with the Hugging Face GGUF editor the fix only takes a few seconds.
The mmproj error comes from this part of the metadata:
Loads correctly:
general.architecture clip
general.type clip-vision

Fails to load:
general.architecture clip
general.type mmproj
I get the error with this abliterated model:
https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-abliterated-GGUF/blob/main/Qwen2.5-VL-7B-Instruct-abliterated.mmproj-Q8_0.gguf
I edited the metadata with the Hugging Face GGUF editor and added the file to my repo, so for that one the patch is no longer needed:
https://huggingface.co/Phil2Sat/Qwen-Image-Edit-Rapid-AIO-GGUF/blob/main/Qwen2.5-VL-7B-Instruct-abliterated/Qwen2.5-VL-7B-Instruct-abliterated.mmproj-Q8_0.gguf
I don't know what the correct naming is, but this model was the only case where I had to patch anything.
I guess the best approach would be to catch the error and show a message telling the user that the repo maintainer should update the metadata.
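As a sketch of that suggestion: the snippet below is a minimal, hypothetical check (not ComfyUI-GGUF's actual code) that parses only the string-typed metadata from a GGUF header, following the layout in the GGUF spec (magic, version, tensor count, KV count, then key/value pairs), and returns a warning when it finds the legacy `general.type = mmproj` value. All function names here are my own, and the toy writer exists only so the example is self-contained.

```python
import io
import struct

GGUF_MAGIC = b"GGUF"
GGUF_TYPE_STRING = 8  # string value type per the GGUF spec's enum

def write_minimal_gguf(kv):
    """Build a tiny GGUF blob (header + string metadata, zero tensors) for demo purposes."""
    buf = io.BytesIO()
    buf.write(GGUF_MAGIC)
    buf.write(struct.pack("<I", 3))        # GGUF version 3
    buf.write(struct.pack("<Q", 0))        # tensor count
    buf.write(struct.pack("<Q", len(kv)))  # metadata KV count
    for key, value in kv.items():
        raw_key = key.encode("utf-8")
        buf.write(struct.pack("<Q", len(raw_key)))
        buf.write(raw_key)
        buf.write(struct.pack("<I", GGUF_TYPE_STRING))
        raw_val = value.encode("utf-8")
        buf.write(struct.pack("<Q", len(raw_val)))
        buf.write(raw_val)
    return buf.getvalue()

def read_string_metadata(blob):
    """Read leading string-typed metadata KVs; stop at the first non-string value."""
    f = io.BytesIO(blob)
    if f.read(4) != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    _version, = struct.unpack("<I", f.read(4))
    _tensors, n_kv = struct.unpack("<QQ", f.read(16))
    meta = {}
    for _ in range(n_kv):
        klen, = struct.unpack("<Q", f.read(8))
        key = f.read(klen).decode("utf-8")
        vtype, = struct.unpack("<I", f.read(4))
        if vtype != GGUF_TYPE_STRING:
            break  # skipping non-string values is out of scope for this sketch
        vlen, = struct.unpack("<Q", f.read(8))
        meta[key] = f.read(vlen).decode("utf-8")
    return meta

def check_mmproj(meta):
    """Return a warning message (instead of hard-failing) for legacy mmproj metadata."""
    if meta.get("general.architecture") == "clip" and meta.get("general.type") == "mmproj":
        return ("legacy mmproj metadata detected: ask the repo maintainer to set "
                "general.type to clip-vision, or patch it with the HF GGUF editor")
    return None
```

With this, a loader could print the warning and continue instead of erroring out, which is roughly what the proposal above asks for.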
I think it is ComfyUI-GGUF's responsibility not to break compatibility with older models. Sure, we could change or requant them, but there are so many tools supporting GGUFs that we should not change our quants just because one niche tool has a compatibility issue with some of them. As long as llama.cpp can load them, third-party tools should be able to as well, or it is on them to make their tool work.
Luckily in this case ComfyUI-GGUF seems to already have fixed this issue in their latest master 3 hours ago: https://github.com/city96/ComfyUI-GGUF/commit/d4fbdb01390230d02d4b33c9b4ad721321f22d67
Please update ComfyUI-GGUF and retest; it will probably all work fine now without us having to change anything.