:yea:

The context of the Reddit thread, btw, was a discussion of how best to make money from AI generators

    • laziestflagellant [they/them]
      hexagon
      ·
      2 years ago

      The community as a whole is so fucking hostile towards regular digital and traditional artists, it's fucked up lol

      AI Techbros: lol artists think they matter anymore, nothing they do is special, cope seethe and mald

      Stable Diffusion 2.0: We removed data from a lot of living artists in our new model [1]

      AI Techbros: NOOOOOOO YOU'VE RUINED IT, YOU'VE TAKEN THE SOUL OUT OF IT, IT'S UNUSABLE NOW!!!

      [1] Not actually — the images are still in the dataset; they're just no longer linked to the artists' names. So the model still benefits, uncredited, from the artists' labor, but now it's more on the down-low and less likely to bring bad press

      • UlyssesT [he/him]
        ·
        2 years ago

        The community as a whole is so fucking hostile towards regular digital and traditional artists, it’s fucked up lol

        I saw that here on Hexbear: "Artists are just lazy and entitled aristocrats, lol!" :so-true:

          • UlyssesT [he/him]
            ·
            2 years ago

            Yes, almost in those exact same words, in particular the part about artists being like aristocrats and being "entitled."

            Judging by my previous exchanges with you, you're probably asking me in bad faith, but I answered you anyway.

              • UlyssesT [he/him]
                ·
                2 years ago

                I don't think I was necessarily referring to your post. The post I was talking about happened weeks ago.

                But, as I suspected, you're already replying to me with a chip on your shoulder and hostile assumptions about what I said, even calling me a liar in a passive-aggressive :reddit-logo: way.

      • drhead [he/him]
        ·
        2 years ago

        Actually, this change wasn't even intentional. It was an indirect result of switching to a different CLIP model (the part that converts words into something the model understands) that doesn't have as much in it but is open source.

        The change they made intentionally, which people complained about, was filtering the dataset for NSFW imagery. Which is a good thing (even if people are going to finetune it back in anyway). But they did it extremely aggressively, flagging anything with a LAION NSFW score >0.1. You're supposed to use that filter at >0.99, which covers nearly all NSFW content; below that, it's almost all false positives. The 0.1 threshold filtered out 6% of the dataset, and in my experience the model just doesn't generate pictures of people as well as I remember it doing (all of my Stalin photos look wrong on 2.0), so the filtering may be a factor. Last I heard they acknowledged that >0.1 was a mistake and are using >0.99 for future models.
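The threshold effect drhead describes can be sketched as a simple filter. This is a toy illustration only: the field name `punsafe`, the sample records, and the scores below are assumptions for the sketch, not the actual LAION metadata or Stable Diffusion's real filtering pipeline.

```python
# Toy sketch of threshold-based NSFW filtering, as described in the comment above.
# "punsafe" is an assumed field name standing in for a LAION-style NSFW score
# (0 = clearly safe, 1 = clearly unsafe); the records are invented examples.

def filter_dataset(records, threshold):
    """Keep only records whose NSFW score is at or below the threshold."""
    return [r for r in records if r["punsafe"] <= threshold]

# Most safe images score near 0 and genuinely NSFW ones near 1, but many
# harmless photos of people land in between — the false positives at issue.
records = [
    {"caption": "a cat on a sofa",          "punsafe": 0.02},
    {"caption": "portrait photo of a man",  "punsafe": 0.35},  # false positive
    {"caption": "beach volleyball match",   "punsafe": 0.60},  # false positive
    {"caption": "explicit image",           "punsafe": 0.997},
]

strict = filter_dataset(records, threshold=0.1)    # drops the false positives too
lenient = filter_dataset(records, threshold=0.99)  # drops only the near-certain NSFW
print(len(strict), len(lenient))  # → 1 3
```

With the aggressive 0.1 cutoff, the two borderline photos of people are thrown out along with the explicit image; at 0.99, only the near-certain NSFW record is removed. That is the mechanism behind the complaint that the 0.1 filter degraded generations of people.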