LoRA Training and fine-tuning?
When will we get official LoRA training or fine-tuning code?
I have LoRA training ready to go in AI Toolkit. Just waiting on the base weights to be released. Training on the turbo model directly quickly breaks down the distillation, as expected.
Where can I find info about the base Z-Image model?
Absolute FKN legend, the GOAT of the training game! On every groundbreaking model in a heartbeat; your importance to the community is so greatly appreciated!
I found a working trainer, but I got OOM after a few steps :(
What GPU did you use? DreamBooth or LoRA? Anyway, it's impossible to do that on my 3060 6GB, so I have to rent a server with better GPUs...
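For context on why LoRA is the memory-friendly option here: the base weights stay frozen, and only two small low-rank matrices per layer are trained, so the optimizer state covers a tiny fraction of the model. A minimal sketch in plain NumPy (the layer sizes and rank below are illustrative, not Z-Image's actual configuration):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Linear layer with a LoRA adapter.

    W: frozen base weight, shape (d_out, d_in)
    A: trainable down-projection, shape (r, d_in)
    B: trainable up-projection, shape (d_out, r), zero-initialized
    """
    r = A.shape[0]
    base = x @ W.T                           # frozen path, never updated
    update = (x @ A.T) @ B.T * (alpha / r)   # low-rank trainable path
    return base + update

# Illustrative sizes only (not Z-Image's real dimensions).
d_in, d_out, r = 4096, 4096, 16
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
A = rng.standard_normal((r, d_in)).astype(np.float32) * 0.01
B = np.zeros((d_out, r), dtype=np.float32)   # zero init: adapter starts as a no-op

x = rng.standard_normal((2, d_in)).astype(np.float32)
y = lora_forward(x, W, A, B)

trainable = A.size + B.size
frozen = W.size
print(f"trainable params: {trainable:,} "
      f"({100 * trainable / frozen:.2f}% of the frozen layer)")
```

With B at zero, the adapter contributes nothing at step 0, so training only has to learn a residual on top of the frozen weights; and because gradients and optimizer state exist only for A and B (under 1% of the layer here), LoRA often fits on GPUs where full fine-tuning OOMs.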
GOAT, I'm waiting for you
Just to be clear, is this going to be a fully open-weight model we can train? Open source doesn't mean open weights, which is what a good model needs.
AI Toolkit - "Add support for training Z-Image Turbo with a de-distill training adapter"
https://github.com/ostris/ai-toolkit/commit/4e62c38df5eb25dcf6a9ba3011113521f1f20c10
Without open weights, it would be just like Flux 1.dev: a hack. Flux 1.dev's weights were frozen, so it couldn't actually learn. A model with frozen weights is like a museum art piece, while one with open weights can learn anything new we train it on.