How to extract LoRAs properly? .weight_scale etc.

#97
by siraxe - opened

Just curious, what are the main rules? Seeing a bunch of these:
unet unexpected: ['blocks.0.self_attn.q.weight_scale',
Guess there's some naming mismatch between block names.

Owner

That warning is because you are using an fp8_scaled model, and the scale weights aren't something that can be extracted. I'm unsure if using fp8_scaled even works for LoRA extraction (never tried); in general it would be best to use higher precision if possible.

If it still works as a LoRA then it should be fine despite the "unet unexpected" messages.
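If you do want to work from an fp8_scaled checkpoint anyway, one conceptual workaround (a toy sketch only; the `.weight_scale` key pattern comes from the warning above, everything else here is hypothetical, and tensors are modeled as plain lists) is to dequantize first: fold each per-tensor scale into its paired weight and drop the scale entries, so an extractor only sees ordinary weight keys.

```python
def dequantize_fp8_scaled(state_dict):
    """Fold per-tensor '.weight_scale' factors into their paired weights
    and drop the scale entries, leaving only ordinary '.weight' keys.
    Tensors are plain lists of floats here, purely for illustration."""
    out = {}
    for key, value in state_dict.items():
        if key.endswith(".weight_scale"):
            continue  # consumed below via its paired weight key
        if key.endswith(".weight") and key + "_scale" in state_dict:
            scale = state_dict[key + "_scale"]
            out[key] = [w * scale for w in value]  # dequantize
        else:
            out[key] = value
    return out

# Toy state dict mimicking the key names from the warning message
sd = {
    "blocks.0.self_attn.q.weight": [0.5, -1.0, 2.0],
    "blocks.0.self_attn.q.weight_scale": 0.125,
}
clean = dequantize_fp8_scaled(sd)
print(clean)  # {'blocks.0.self_attn.q.weight': [0.0625, -0.125, 0.25]}
```

Even so, the precision already lost to fp8 quantization can't be recovered this way, which is why a higher-precision source model is the better option.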

Great, that worked, ty 😀
Out of curiosity I tried extracting a rank_128_fp32 lightx2v i2v low LoRA; didn't see much difference, not worth it. It just takes up 2.5 GB.

Maybe unrelated, but I saw people remapping multiple LoRAs to a different range (like -1.0 to 1.0, or 1.0 to 2.0). Do you maybe know how that is done, and could LoRA extract also be used for that with some extra option?
(base model + A) - (base model + B) = C, or some other way with scaling something?

Owner

You would apply the LoRAs like normal in comfy and then extract from that.
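To make the arithmetic concrete (a toy sketch with a single scalar "weight"; real extraction diffs full tensors and then low-rank-decomposes the result): merging LoRAs into the base and extracting against that same base recovers the combined delta, and the (base + A) - (base + B) idea reduces to A - B because the base cancels out.

```python
# Toy single-weight model: merge, then diff against the base.
base = 10.0
delta_a = 0.8   # weight change contributed by LoRA A
delta_b = 0.3   # weight change contributed by LoRA B

merged_a = base + delta_a
merged_b = base + delta_b

# Extract-from-merged: diffing against the base gives back the delta.
extracted_a = merged_a - base      # recovers LoRA A's delta (0.8)

# (base + A) - (base + B): the base cancels, leaving A - B (0.5).
diff_lora = merged_a - merged_b

# Remapping a LoRA's effective range is just scaling its delta.
rescaled = 0.5 * extracted_a       # same direction, half strength
print(extracted_a, diff_lora, rescaled)
```

So applying LoRAs at whatever strengths you want in Comfy and extracting from the result bakes those strengths into the new LoRA's delta.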
