• Even_Adder@lemmy.dbzer0.com (OP) · edited · 2 months ago

      I don't think so. They're going to have to do a lot better than a tutorial to win people back. That said, it also sucks that the two Flux models are distilled, which makes them close to impossible to fine-tune.
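
      For a rough picture of why distillation gets in the way, here's a toy guidance-distillation loop in PyTorch. It's purely illustrative (the `Denoiser` class, shapes, and training setup are all invented, not Flux's actual code): the student is trained to match the teacher's classifier-free-guided output in a single pass, so the guidance behavior gets baked into its weights, and naive fine-tuning afterwards pulls the weights away from that compressed target.

      ```python
      import torch
      import torch.nn as nn

      # Toy guidance-distillation loop. Hypothetical, for illustration
      # only: nothing here is Flux's real architecture or training code.
      class Denoiser(nn.Module):
          def __init__(self, dim: int = 64):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(dim * 2, dim), nn.SiLU(), nn.Linear(dim, dim)
              )

          def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
              return self.net(torch.cat([x, cond], dim=-1))

      teacher = Denoiser().eval()   # pretrained model, frozen
      student = Denoiser()          # distilled copy being trained
      opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
      guidance_scale = 4.0

      for _ in range(100):
          x = torch.randn(8, 64)     # stand-in for noised latents
          cond = torch.randn(8, 64)  # stand-in for text conditioning
          with torch.no_grad():
              uncond_out = teacher(x, torch.zeros_like(cond))
              cond_out = teacher(x, cond)
              # Classifier-free guidance, computed by the teacher
              target = uncond_out + guidance_scale * (cond_out - uncond_out)
          # The student must hit the guided target in ONE pass: guidance
          # is compressed into its weights rather than applied at inference.
          loss = nn.functional.mse_loss(student(x, cond), target)
          opt.zero_grad()
          loss.backward()
          opt.step()
      ```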

      • clb92@feddit.dk · edited · 2 months ago

        People have been training great Flux LoRAs for a while now, haven't they? Is a LoRA not a fine-tune, or have I misunderstood something?

          • clb92@feddit.dk · 2 months ago

            Oh well, in practice I'll just continue to enjoy this (possibly forgetful and not-fully-fine-tunable) model, which still gives me amazing results 😊

          • erenkoylu@lemmy.ml · edited · 2 months ago

            Quite the opposite. LoRAs are very effective against catastrophic forgetting, while full fine-tuning is much more prone to it (but also much more powerful).
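
            To see why, here's a minimal LoRA layer sketch (generic math, not any particular library's API; `LoRALinear` and its parameters are invented for illustration). The pretrained weight is frozen and only the low-rank factors train, so the original model is always recoverable by dropping or down-scaling the adapter:

            ```python
            import torch
            import torch.nn as nn

            # Minimal LoRA wrapper around a frozen pretrained linear layer.
            class LoRALinear(nn.Module):
                def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
                    super().__init__()
                    self.base = base
                    for p in self.base.parameters():
                        p.requires_grad_(False)  # freeze pretrained weights
                    self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
                    self.B = nn.Parameter(torch.zeros(base.out_features, rank))
                    self.scale = alpha / rank

                def forward(self, x: torch.Tensor) -> torch.Tensor:
                    # y = base(x) + scale * x A^T B^T; B starts at zero, so
                    # training begins exactly at the pretrained behavior.
                    return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

            layer = LoRALinear(nn.Linear(512, 512))
            trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
            print(trainable)  # ~8K adapter params instead of 512 * 512 base weights
            ```

            Full fine-tuning updates every base weight directly, which is what makes it both more powerful and more likely to overwrite what the model already knows.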

        • Even_Adder@lemmy.dbzer0.com (OP) · 2 months ago

          Those might just be LoRA-merged models, not full fine-tunes. From what I've heard, full fine-tuning doesn't work because the models are distilled. You'd have to find a way to undistill them before you could train them.
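
          To illustrate the difference: merging a LoRA just folds the adapter back into the base weights once, roughly like this (a generic sketch with invented names, not any specific tool's code), so the released checkpoint looks like a new model even though the base was never fully fine-tuned:

          ```python
          import torch

          # Fold a trained LoRA adapter into the base weight matrix.
          def merge_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
                         alpha: float, rank: int) -> torch.Tensor:
              """Return the merged weight W + (alpha / rank) * B @ A."""
              return W + (alpha / rank) * (B @ A)

          W = torch.randn(512, 512)        # frozen base weight
          A = torch.randn(8, 512) * 0.01   # trained LoRA factors, rank 8
          B = torch.randn(512, 8) * 0.01
          W_merged = merge_lora(W, A, B, alpha=16.0, rank=8)  # same shape as W
          ```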