It's less spicy than the usual r/StableDiffusion slop, but it's just too cringe not to post.

  • Belly_Beanis [he/him]
    ·
    2 months ago

    Tweening right now is already finicky, and it'd be nice to have tools to make it better. I think what I've seen the most of is iterations of the entire image that then get linked together: instead of rendering just a hand or a mouth moving, the software generates an entirely new image similar to the previous frame. Incredibly inefficient way of doing animation.
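
    For reference, that whole-frame approach is basically chained img2img: each new frame is a full diffusion render loosely anchored to the previous one. A rough sketch of it with the diffusers library; the model, prompt, frame count, and strength value are all placeholders, not recommendations.

    ```python
    # Chained img2img: every frame is a complete re-render of the image,
    # only loosely anchored to the previous frame, which is why it burns
    # so much compute for so little actual motion.
    import torch
    from diffusers import AutoPipelineForImage2Image
    from PIL import Image

    pipe = AutoPipelineForImage2Image.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    frame = Image.open("frame_000.png").convert("RGB")  # hand-drawn keyframe
    for i in range(1, 24):
        frame = pipe(
            prompt="character turning their head, same style as the keyframe",
            image=frame,
            strength=0.35,  # low strength = small change per frame
        ).images[0]
        frame.save(f"frame_{i:03d}.png")
    ```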

    I've wanted to do something like upload everything I've ever drawn and then train an AI to replicate my own technique. But the ethics behind setting a car on fire to save me 30~60 minutes of work isn't something I'm interested in. Not to mention all the issues with copyright. Immediately my work will be paywalled. I won't see a dime and the other user will be paying for something I'd give them for free.

    Whole thing is a fuck.

    • KobaCumTribute [she/her]
      hexagon
      ·
      2 months ago

      > I've wanted to do something like upload everything I've ever drawn and then train an AI to replicate my own technique. But the ethics behind setting a car on fire to save me 30~60 minutes of work isn't something I'm interested in. Not to mention all the issues with copyright. Immediately my work will be paywalled. I won't see a dime and the other user will be paying for something I'd give them for free.

      What you'd want to do there is pick an open-source model like SDXL or Flux and then train a LoRA for it, which, depending on your hardware, you might be able to do locally in a couple of hours. There are also sites like civitai that will do the training for you for a dollar or so; you get the safetensor file back and can either make it free to download there or keep it private and distribute it however you like. With a small enough dataset it's not that long or energy-intensive a process, and you'd retain control of it yourself.
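
      Once the training run spits out the .safetensors file, using it is just a matter of loading it on top of the base model. A minimal sketch with the diffusers library, assuming an SDXL LoRA; the output directory, file name, and trigger word here are placeholders:

      ```python
      import torch
      from diffusers import StableDiffusionXLPipeline

      # base model the LoRA was trained against
      pipe = StableDiffusionXLPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")

      # load your trained LoRA from wherever you keep it (local dir shown here)
      pipe.load_lora_weights("./my_lora_output", weight_name="my_style_lora.safetensors")

      # "mystyle" stands in for whatever trigger word was used during training
      image = pipe("mystyle, ink sketch of a fox in the rain", num_inference_steps=30).images[0]
      image.save("test.png")
      ```

      That's also all somebody you share the file with would need, as long as they run it against the same base model.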

      You would have to tag your images yourself in ways that the machine can process, though, and I don't know anything about that. Some models want keyword salad and others want natural language descriptions, and I couldn't tell you what the best practices for either are.
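
      For the tagging part, a lot of trainers (the kohya-style scripts, for example) just read a plain .txt caption with the same file name sitting next to each image. If it helps, here's a trivial script that stubs those files out so you can fill them in by hand; the folder name, trigger word, and example captions are made up:

      ```python
      # Write an empty/template sidecar caption for every image in the dataset
      # folder, to be filled in by hand with either tag lists or sentences.
      from pathlib import Path

      dataset = Path("my_art_dataset")
      template = "mystyle, "  # trigger word first, then tags or a sentence

      for img in sorted(dataset.iterdir()):
          if img.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
              continue
          caption = img.with_suffix(".txt")
          if not caption.exists():
              # keyword-salad style:    "mystyle, 1girl, ink, heavy linework, rain"
              # natural-language style: "mystyle, an ink drawing of a girl in the rain"
              caption.write_text(template, encoding="utf-8")
              print(f"wrote blank caption for {img.name}")
      ```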

      That's not to actively encourage going and doing that, of course; I'm just saying it's more accessible and efficient at a hobbyist scale these days than you'd think.