• clb92@feddit.dk · 3 days ago

    People have been training great Flux LoRAs for a while now, haven't they? Is a LoRA not a finetune, or have I misunderstood something?

      • erenkoylu@lemmy.ml · 2 days ago

        Quite the opposite. LoRAs are very resistant to catastrophic forgetting, because the base model's weights stay frozen and only small low-rank adapter matrices get trained. Full finetuning updates the base weights directly, which makes it much riskier in that respect (but also much more powerful).
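
        A minimal PyTorch sketch of why that's the case (illustrative only; the `LoRALinear` name, rank, and scaling here are assumptions for the example, not Flux's or any library's actual code):

        ```python
        import torch
        import torch.nn as nn

        class LoRALinear(nn.Module):
            """Wraps a pretrained linear layer with a trainable low-rank delta."""

            def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
                super().__init__()
                self.base = base
                # The pretrained weights are frozen, so the model's original
                # knowledge can't be overwritten -- that's the forgetting protection.
                self.base.weight.requires_grad_(False)
                if self.base.bias is not None:
                    self.base.bias.requires_grad_(False)
                self.scale = alpha / r
                # Only these two small matrices receive gradients.
                self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
                # B starts at zero, so training begins from the unmodified base model.
                self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Effective weight is W + (alpha/r) * B @ A; drop the adapter
                # and you get the original model back exactly.
                return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
        ```

        Full finetuning updates W itself, so there's no such safety net.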

      • clb92@feddit.dk · 3 days ago

        Oh well, in practice I'll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model, which still gives me amazing results 😊