Issue using llama.cpp

#7
by Grimble - opened

I am running llama-server with Q8_0.gguf and llama-joycaption-beta-one-llava-mmproj-model-f16.gguf. The server loads fine and seems to work initially, but I get a lot of browser pop-up errors like "Failed to load image or audio file", and the terminal shows:

```
mtmd_helper_bitmap_init_from_buf: failed to decode image bytes
srv log_server_r: request: POST /v1/chat/completions 127.0.0.1 400
```

I can't tell whether this is a model issue or a llama-server issue. Any help would be appreciated. I'm mostly trying .png images, which I'm not sure are supported, but I've also tried .jpg.
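For what it's worth, `mtmd_helper_bitmap_init_from_buf: failed to decode image bytes` usually means the bytes that reached the server were not a valid image buffer (e.g. the base64 wasn't embedded correctly in the request), rather than an unsupported format. A minimal sketch of building the request payload by hand, assuming llama-server's OpenAI-compatible `/v1/chat/completions` endpoint and a base64 data URI for the image (the helper name and prompt text here are my own, not from llama.cpp):

```python
import base64
import json


def build_payload(image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build an OpenAI-style chat payload with the image embedded as a
    base64 data URI; malformed or truncated base64 here is one common
    cause of a 400 with a 'failed to decode image bytes' log line."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:{mime};base64,{b64}"},
                    },
                ],
            }
        ],
    }


# Demo with the PNG magic bytes just to show the payload shape; in
# practice you would read the whole file, e.g. open(path, "rb").read().
payload = build_payload(b"\x89PNG\r\n\x1a\n")
print(json.dumps(payload)[:80])
```

If a payload built like this still fails, checking that the file opens in another tool and that the POST body isn't being truncated by the client would narrow down whether it's the model or the server.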
