QwQ-32B-Preview LoRA for separating thinking/answer parts

This LoRA adapter was fine-tuned to make QwQ-32B-Preview consistently separate its internal reasoning from its final answer using <THINKING>...</THINKING><ANSWER>...</ANSWER> tags.
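
Since every response is expected to follow this tag layout, downstream code can split the generated text into the two parts with a small parser. The snippet below is a minimal sketch; the function name and the fallback behaviour are illustrative assumptions, not part of this repository:

```python
import re

# Matches the <THINKING>...</THINKING><ANSWER>...</ANSWER> layout described above.
TAGGED_OUTPUT = re.compile(
    r"<THINKING>(?P<thinking>.*?)</THINKING>\s*<ANSWER>(?P<answer>.*?)</ANSWER>",
    re.DOTALL,
)

def split_response(text: str) -> tuple[str, str]:
    """Return (thinking, answer); if the tags are missing, treat the whole text as the answer."""
    match = TAGGED_OUTPUT.search(text)
    if match is None:
        return "", text.strip()
    return match.group("thinking").strip(), match.group("answer").strip()

# Example:
thinking, answer = split_response(
    "<THINKING>2 + 2 is basic arithmetic.</THINKING><ANSWER>4</ANSWER>"
)
print(answer)  # -> 4
```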

A Q4_K_M GGUF version (which can be used as an adapter with Ollama) is available at shakedzy/QwQ-32B-Preview-with-Tags-LoRA-GGUF.
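
With Ollama, a GGUF LoRA can be attached to a local base model through a Modelfile's ADAPTER instruction. The sketch below is illustrative only: the base model tag and the adapter filename are assumptions, so replace them with the QwQ build you actually have and the GGUF file downloaded from the repository above.

```
# Modelfile (illustrative sketch)
# The base model tag below is an assumption; use the QwQ-32B-Preview build you have pulled locally.
FROM qwq:32b-preview-q4_K_M
# The adapter filename is an assumption; point this at the GGUF file downloaded from the repo above.
ADAPTER ./QwQ-32B-Preview-with-Tags-LoRA-Q4_K_M.gguf
```

The combined model can then be built with `ollama create qwq-with-tags -f Modelfile` and run like any other local model.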

Format: Safetensors
Model size: 18.9B params
Tensor types: F32, F16, U8

Model tree for shakedzy/QwQ-32b-Preview-bnb-4bit-wTags
Base model: Qwen/Qwen2.5-32B (this repository is an adapter on top of it)