Oh, the AI art generator has no "soul" and it's soy and reddit? This precious art form (illustrating things that other people pay you to draw, a medium dominated almost entirely by furries, porn, and furry porn) is being destroyed by the evil AI? I'm sorry that the democratization of art creation is so upsetting to you. I've brought dozens of ideas to life by typing words into a prompt, and I didn't have to pay someone $300 to do so.

  • RION [she/her]
    ·
    2 years ago

    I think that's a misunderstanding of how the technology works. It's not directly lifting parts of a piece (unless perhaps you directly tell it to copy one); it's trying to produce something similar from a given prompt, no different than if I were to draw in someone's style or take inspiration from their work, except for the obvious automation of the task.

    • macabrett
      ·
      2 years ago

      Computers cannot take inspiration; claiming it is the same thing is a complete copout.

      I know how the technology works. I am a software engineer. I embrace tools that make art more accessible. This isn't making art more accessible, this is a machine very directly taking in other people's art without permission and constructing new art out of the pieces. Machine learning is a false term. There is no learning. It is not discovering new things. It only knows what has been input. There's no higher level.

      If original art is no longer being made and shoved into the system, these "AIs" will no longer produce new art.

      • sysgen [none/use name,they/them]
        ·
        2 years ago

        That's not true. To take Stable Diffusion as an example, it's a mix of two things: a text encoder trained on image captions, and a "noise-denoise" model that takes these cursed, low-quality images, compresses them into a "semantic" latent representation, adds noise, and learns to remove it.

        Then the text model compresses your prompt into the same kind of semantic representation and uses it to guide the noise-denoise process.
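        A minimal sketch of that two-part pipeline, with a hypothetical hash-based stand-in for the real text encoder and a hand-written stand-in for the trained denoiser (neither is how Stable Diffusion actually computes these; they only show the shape of the conditioning loop):

        ```python
        import hashlib
        import numpy as np

        def embed_text(prompt: str, dim: int = 8) -> np.ndarray:
            # Hypothetical stand-in for the real text encoder (CLIP, in Stable
            # Diffusion): hash the prompt into a deterministic "semantic" vector.
            seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:4], "big")
            return np.random.default_rng(seed).standard_normal(dim)

        def toy_denoiser(latent: np.ndarray, text_emb: np.ndarray, t: float) -> np.ndarray:
            # Hypothetical stand-in for the trained denoiser network: predict a
            # latent that drifts toward the text embedding as noise level t falls.
            return (1 - t) * text_emb + t * latent

        def generate(prompt: str, steps: int = 50, seed: int = 0) -> np.ndarray:
            text_emb = embed_text(prompt)
            # Start from pure noise in the latent space.
            latent = np.random.default_rng(seed).standard_normal(text_emb.shape)
            for i in reversed(range(1, steps + 1)):
                t = i / steps                  # current noise level
                pred = toy_denoiser(latent, text_emb, t)
                latent += (pred - latent) / i  # step partway toward the prediction
            return latent  # the real model would decode this to pixels
        ```

        The point is that the prompt never selects pieces of training images; it only steers each denoising step through the embedding.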

        So, as long as the text model can generalize your prompt effectively, it doesn't need to have seen that exact combination before. It can actually figure out things it hasn't seen before by analogy and generalization, albeit not super well. As this generalization and embedding process gets better and better, it will be more and more able to generate things it has never seen before.
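        To make "by analogy and generalization" concrete: in the shared embedding space, concepts are just vectors, so a prompt the model never saw can still land at a meaningful point between things it did see. A toy illustration, again with a hypothetical hash-based embedding standing in for a learned encoder:

        ```python
        import hashlib
        import numpy as np

        def embed(text: str, dim: int = 8) -> np.ndarray:
            # Hypothetical hash-based embedding, standing in for a learned text encoder.
            seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
            return np.random.default_rng(seed).standard_normal(dim)

        # A point halfway between two concepts that were (say) never captioned
        # together in the training data:
        novel = 0.5 * embed("stained glass") + 0.5 * embed("jellyfish")
        # The denoising process can be conditioned on this vector even though no
        # training image ever carried that exact combination.
        ```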

        Eventually, it will be able to learn fast enough, and generalize well enough, that you will be able to teach it words for new concepts merely by explaining them to it and feeding its results back into itself under arbitrary terms. Then it will be able to produce a fair number of genuinely new things that were only ever explained to it. And eventually, if you can give it a way to classify what is and isn't novel, it will be able to search the embedding space for things that no one has ever thought of.

        You can call this not art. But the idea that it's forever going to be limited to imitation is just false. It's already beginning to show it can do more than that.