It's less spicy than the usual r/StableDiffusion slop, but it's just too cringe not to post.

  • UlyssesT
    ·
    edit-2
    10 days ago

    deleted by creator

    • KobaCumTribute [she/her]
      hexagon
      ·
      2 months ago

      unchecked expansion of this LLM shit

      Just to be clear, LLMs are something different, even if both sorts of models use tensor math. This is some hobbyist with awful taste (like 95+% of Stable Diffusion/Flux hobbyists) making a Flux LoRA on consumer hardware with typical gaming power consumption - he's a dork doing something cringe, but it's about as innocuous as if he'd instead spent the time playing some GPU-intensive game with the settings cranked up. It's the giant chatbot-training datacenters that are burning massive amounts of energy in the hope that naive text prediction will somehow become smart if you just make it big enough and train it long enough, and then trying to deploy those dogshit chatbots everywhere they possibly can even though they still suck horribly.

      • UlyssesT
        ·
        edit-2
        10 days ago

        deleted by creator

        • KobaCumTribute [she/her]
          hexagon
          ·
          2 months ago

          it's fine as a hobby gimmick, I suppose.

          I do want to clarify that my position is still that 90+% of Stable Diffusion/Flux hobbyists should be pikmin-carry-l wojak-nooo pikmin-carry-r barbara-pit because the scene is overrun with chuds, grifters, pedophiles, and people who are just too cringe. But the tech itself is cool and relatively innocuous at a hobbyist scale.

          It's still a problem on the corpo scale that is stoking and profiting from the hype wave.

          Yeah, I think it's basically a problem of scale and induced demand. One image generated with a hobbyist setup takes a few seconds to a few minutes depending on the model and GPU, is pretty comparable in energy usage to using that machine for more mundane rendering tasks for the same amount of time, and is probably not meaningfully different from the energy used over the hours it would take to make the same image with more traditional digital methods. The problem is that having such a fast and convenient way of producing images induces demand for more of them, so it's not just one image, it's dozens or hundreds or thousands, all but one or two of which get thrown out. At the corporate scale it's technically more efficient per image, but it scales even further and tries to draw in more people, so now it's millions upon millions of images, and because it's an uncontrollable remote server instance instead of the comparatively sophisticated tools a local machine can run, every single one of those images is useless noise.
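
          (Back-of-envelope version of that comparison, with assumed numbers rather than measurements:)

          ```python
          # Rough per-image energy for a hobbyist setup - assumed figures, not measurements.
          gpu_watts = 300          # assumed draw of a consumer GPU under full load
          seconds_per_image = 30   # assumed mid-range generation time
          wh_per_image = gpu_watts * seconds_per_image / 3600
          print(f"~{wh_per_image:.1f} Wh per image")  # ~2.5 Wh, the same as ~30s of GPU-heavy gaming
          ```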

          I hate the corporate shit so much. It's just all bad, all the time, at a huge cost, with no redeeming qualities whatsoever. At least with the hobbyist stuff there's at least something interesting and potentially useful to it among all the bad.

          • UlyssesT
            ·
            edit-2
            10 days ago

            deleted by creator

            • KobaCumTribute [she/her]
              hexagon
              ·
              2 months ago

              Yeah, like, just looking at this in a vacuum, the tech as it stands now could probably let a team of animators skip contracting out to other studios for a bunch of extra grunt work like hand-interpolating between keyframes, etc. In a better system that would be amazing, because it would mean that artists could produce things without needing to subordinate so many others to their vision and without the sort of institutional backing necessary to get all those extra hands involved, and that artists wouldn't get stuck doing thankless grunt work for someone else like they do now.

              But instead it's used as a glorified gacha-pull system for the worst people alive just hitting the treat button over and over, and when it does see corporate animation use it'll be used to cut costs and pad exec salaries and investor profits instead of being used to pay artists better or to make artist-led projects more viable and prevalent. And that's without getting into Hollywood's interest in using it to make even shittier post-production CGI effects for their ever-worsening slop.

              • Belly_Beanis [he/him]
                ·
                2 months ago

                Tweening right now is already finicky, and it'd be nice to have tools to make it better. What I've seen the most of is models generating iterations of the entire image that then get linked together: instead of rendering just a hand or mouth moving, the software generates an entirely new image similar to the previous frame. It's an incredibly inefficient way of doing animation.

                I've wanted to do something like upload everything I've ever drawn and then train an AI to replicate my own technique. But the ethics of setting a car on fire to save myself 30~60 minutes of work aren't something I'm interested in. Not to mention all the issues with copyright: my work would immediately be paywalled, I wouldn't see a dime, and the other user would be paying for something I'd give them for free.

                Whole thing is a fuck.

                • KobaCumTribute [she/her]
                  hexagon
                  ·
                  2 months ago

                  I've wanted to do something like upload everything I've ever drawn and then train an AI to replicate my own technique. But the ethics of setting a car on fire to save myself 30~60 minutes of work aren't something I'm interested in. Not to mention all the issues with copyright: my work would immediately be paywalled, I wouldn't see a dime, and the other user would be paying for something I'd give them for free.

                  What you'd want to do there is pick an open-source model like SDXL or Flux and then train a LoRA for it, which, depending on your hardware, you might be able to do locally in a couple of hours. There are also sites like civitai that you can pay a dollar or so to do the training for you; you'd get the safetensor file and could then either make it a free download there or keep it private and distribute it however you like. With a small enough dataset it's not that long or energy-intensive a process, and you would retain control of it yourself.
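
                  (For a sense of what the end result looks like, here's a minimal sketch of loading that safetensor file for generation - assuming the Hugging Face diffusers library and an SDXL LoRA; the paths and prompt are placeholders.)

                  ```python
                  # Minimal sketch: loading a locally trained LoRA into an SDXL pipeline.
                  # Assumes the diffusers library; file path and prompt are placeholders.
                  import torch
                  from diffusers import StableDiffusionXLPipeline

                  pipe = StableDiffusionXLPipeline.from_pretrained(
                      "stabilityai/stable-diffusion-xl-base-1.0",
                      torch_dtype=torch.float16,
                  ).to("cuda")

                  # The safetensor file produced by training, locally or via a site like civitai.
                  pipe.load_lora_weights("./my_style_lora.safetensors")

                  image = pipe("ink sketch portrait, my_style", num_inference_steps=30).images[0]
                  image.save("out.png")
                  ```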

                  You would have to tag your images yourself in ways that the machine can process, though, and I don't know anything about that. Some models want keyword salad and others want natural language descriptions, and I couldn't tell you what the best practices for either are.
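
                  (To illustrate the two captioning styles: a lot of trainers expect one .txt caption per image with the same basename - that sidecar convention is an assumption here, so check whatever tool you end up using.)

                  ```python
                  # Hypothetical captions showing keyword-salad vs. natural-language styles;
                  # the one-.txt-per-image sidecar layout is an assumption borrowed from common trainers.
                  from pathlib import Path

                  captions = {
                      "drawing_001.png": "my_style, ink sketch, portrait, crosshatching, high contrast",  # keyword salad
                      "drawing_002.png": "a loose ink portrait of a woman, drawn in my_style",            # natural language
                  }

                  for image_name, text in captions.items():
                      Path(image_name).with_suffix(".txt").write_text(text)
                  ```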

                  That's not to actively encourage going and doing that, of course; I'm just saying it's more accessible and efficient at a hobbyist scale these days than you'd think.

              • UlyssesT
                ·
                edit-2
                10 days ago

                deleted by creator

    • ExotiqueMatter@lemmygrad.ml
      ·
      2 months ago

      so-true "We just have to sit back until EnTRepReNeURs solve the climate catastrophe by the magic of green capitalism, and if that doesn't work out we'll just have to start over civilization after 99% of the population dies."

      • UlyssesT
        ·
        edit-2
        10 days ago

        deleted by creator