Oh, the AI art generator has no "soul" and it's soy and reddit? This precious art form (illustrating things other people pay you to draw, a medium dominated almost entirely by furries, porn, and furry porn) is being destroyed by the evil AI? I'm sorry that the democratization of art creation is so upsetting to you. I've brought dozens of ideas to life by typing words into a prompt, and I didn't have to pay someone $300 to do so.
How is this different from something like the printing press? Legitimately asking, because I don't see how AI art is particularly different from other technologically derived means of automation. Yeah, under capitalism it will probably be abused, but what new technology isn't?
When you use a printing press to make duplicates of someone else's work, you don't erase their name and replace it with yours. Perhaps a publishing company says "hey we did the work to make copies of this" which is perfectly acceptable. AI art is literally taking other people's art without permission and smashing it together with zero credit or money going to the original artist.
amalgamating a billion different works of art into something new isn't "stealing" the art and is, in fact, something that you and literally every other artist ever does whether you know it or not, unless you've developed your art entirely cut off from the rest of society
there is no such thing as "an original idea," every idea anyone has ever had has built off of those of someone else
These people that are against it are reactionaries, all socialist literature agrees that this is good and should be held in common for the benefit of all. :shrug-outta-hecks:
they're right to not want it in the hands of corporations and to the benefit of the wealthy at the expense of artists but like this is literally just Luddites 2.0
The only real form of art is to smear shit on a cave wall :anprim-pat:
Yes, everyone is inspired by other things when making art. But we bring our own experiences into it, and that art evolves.
AI art as it stands today simply takes other people's art and combines them in clever ways. There's no additional layer of experience. There's nothing that evolves the art. It's literally just taking the work of others and claiming it as your own.
This is not really true. This generation of algorithms works by generalizing and condensing ideas into a vector representation, where directions and distances in that vector space naturally come to represent the addition, subtraction, and similarity of concepts.
As a result, you can quite literally "explain" concepts to these algorithms that they have never encountered - perhaps even convey something like the essence of them - and they can apply those concepts to art.
This is not really different from a human or animal taking inspiration. It's a very similar mechanism, it's just much more primitive. Think of it as a primitive form of intuition.
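If it helps to make that concrete, here's a toy sketch of what "concepts as vectors" means. The numbers below are completely made up (no real model produces them), but the arithmetic is the same basic idea the text encoders rely on:

```python
# Toy sketch of "concepts as vectors" -- hand-written 4-dimensional embeddings,
# just to show the kind of arithmetic these models rely on. Real models learn
# hundreds of dimensions from data instead of having them written by hand.
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.9, 0.0]),
}

def cosine(a, b):
    # Similarity between two concept vectors (1.0 = pointing the same way).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" - "man" + "woman" lands closest to "queen":
query = embeddings["king"] - embeddings["man"] + embeddings["woman"]
best = max(embeddings, key=lambda w: cosine(query, embeddings[w]))
print(best)  # queen
```

Real encoders learn those directions across hundreds of dimensions instead of four hand-picked ones, but prompt handling rides on exactly this kind of geometry.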
I think that's a misunderstanding of how the technology works. It's not directly lifting parts of a piece (unless perhaps you tell it directly to do something to a specific one); it's trying to produce something similar in combination with a given prompt, no different than if I were to draw in someone's style or take inspiration from their work, except for the obvious automation of the task.
Computers cannot take inspiration, claiming it is the same thing is a complete copout.
I know how the technology works. I am a software engineer. I embrace tools that make art more accessible. This isn't making art more accessible, this is a machine very directly taking in other people's art without permission and constructing new art out of the pieces. Machine learning is a false term. There is no learning. It is not discovering new things. It only knows what has been input. There's no higher level.
If original art is no longer being made and shoved into the system, these "AI"s will no longer produce new art.
That's not true. To take Stable Diffusion as an example, it's a mix of two things: a text encoder trained on image captions, and a "noise-denoise" model that takes those cursed, low-quality training images, compresses them into a compact "semantic" representation, adds noise, and learns to remove it again.
Then, at generation time, the text encoder compresses your prompt into the same kind of semantic representation, and that embedding steers the noise-denoise process as it turns random noise into an image.
So, as long as the text model can generalize your prompt effectively, it doesn't need to have seen its meaning before. It can actually figure out things it hasn't seen before by analogy and generalization, albeit not super well. As this generalization and embedding process gets better and better, it will be more and more able to generate things it has never seen before.
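Very roughly, the generation loop looks something like the sketch below. To be clear, this is a toy illustration, not Stable Diffusion's actual code - `embed_text` and `denoise_step` here are made-up stand-ins for the trained networks - but the control flow is the point: start from pure noise and repeatedly denoise, steered the whole way by the prompt embedding.

```python
# Toy sketch of a prompt-conditioned denoising loop (not real Stable Diffusion
# code). embed_text() and denoise_step() are stand-ins for trained networks.
import numpy as np

rng = np.random.default_rng(0)

def embed_text(prompt: str) -> np.ndarray:
    # Stand-in for the text encoder: maps a prompt to a "semantic" vector.
    seed = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(seed).normal(size=64)

def denoise_step(latent: np.ndarray, text_emb: np.ndarray, t: float) -> np.ndarray:
    # Stand-in for the trained denoiser: strip away a bit of noise while being
    # nudged toward whatever the prompt embedding points at.
    return latent + 0.1 * t * (text_emb - latent)

prompt_emb = embed_text("a cat in the style of nobody in particular")
latent = rng.normal(size=64)        # start from pure noise
for step in range(50):              # iteratively denoise
    t = 1.0 - step / 50
    latent = denoise_step(latent, prompt_emb, t)
# A real pipeline would now decode `latent` back into pixels with a separate
# decoder network.
```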
Eventually, it will be able to learn fast enough and generalize well enough that you will be able to teach it words for new concepts merely by explaining them to it and feeding its results back into itself using arbitrary terms. Then it will be able to produce a fair number of genuinely new things that were only ever explained to it. And eventually, if you can give it a way to classify what is and isn't novel, it will be able to search the embedding space for things that no one has ever thought about.
You can call this not art. But the idea it's forever going to be limited to imitation is just false. It's already beginning to show it can do more than that.
Except you're able to reproduce written works at a much faster rate with a printing press than by laboriously copying them out by hand. Plus, it's extremely trivial to maliciously misattribute the work (you literally just replace the real author's name with someone else's). The only way for the real author to fight back is to either partner up with someone who also has a printing press in order to match the production speed of the misattributed work, or get the state to shut down the fraud's printing press. Trying to outproduce the printing press by writing faster, or going "uh actually, I wrote that" to everyone you know, is an exercise in futility.
Duplicating books wasn't really an art - unless you count stuff like gothic lettering - it was just work that needed to be done, more comparable to artisan labor like furniture making. The books that resulted weren't worse or shallower or lesser for being made by a machine; they were still the exact same words that a human wrote. More importantly, the people building printing presses weren't stealing anything from the hand copyists. AI art is based on stealing art that artists made and using it to basically create a frankenartist that can draw whatever they want for free, instead of hiring an actual artist. It's exploitative.
I mean I would count the lettering - calligraphy is a thing, isn't it? There's also the matter of illuminations and other marginalia that wouldn't be replicated via the printing press.
A key point for me is that the AI can't draw "whatever they want". It can follow a prompt, but it's never going to perfectly recreate the idea someone has in their head. Sometimes it's hard to even get something remotely similar to your prompt, much less something that matches up with your vision. That makes art aiming to express something specific or make a point hard to do unless you do post processing.
Also, it's not stealing. The AI aims to make something similar to whatever you've provided it with for inspiration. Sure, you could tell it explicitly to modify a work of art - but at that point, what's the difference between that and doing it yourself? The fact that a robot's taking care of it? Why does the level of individual effort put into it matter?
People don't really buy books for calligraphy, so I'm kind of going to ignore that one.
While it's true that current AI are rather inflexible, this is likely only a temporary situation. Much like how the GPT algorithms went from almost entirely incoherent garbage to being able to write original jokes on occasion, AI art will soon evolve to be able to create things to a high degree of specificity and accuracy.
Calling the training data "inspiration" is being a little generous, I would say, given that the end result is entirely based on taking lots of little details from that art. Copying from many sources is still copying. Real-life artists draw on lots of different factors - their emotions, their perception of the world, the things they see in real life that aren't artistic works (natural beauty, for example) - to add their original flavor to a piece, whereas AI artists base their drawings exclusively on the drawings of others. My opinion is that training with an art piece should be something that requires the rights-holders' consent.
100% this. The machine cannot be inspired. It can merely take its inputs (which were given to it without permission) and mash together combinations.
At least with the one I've used, you need to provide it with a prompt, so I don't think that's quite true. I suppose you can debate whether a prompt is valid artistic input, but that's splitting hairs in a way that could start to exclude things like photography, which I don't imagine would get much traction.
:michael-laugh: :michael-laugh: :michael-laugh: ah thank god i can stroll into a shop and get the same King Lear as it was printed in 1608 and my copy will be exactly the same as anyone else's!
It isn't different from any other form of technological progress, people are assigning mystical properties to what art is.
With new technologies, the human still creates the art themselves. With digital art, it's still humans making the art. With AI, humans aren't making any of the art at all.
Depends on how you define making or creating - the AI does nothing without a prompt input
deleted by creator
Nooo I wanted to respond you had good points :(
I decided that this was better as a separate comment instead of a reply. It's still in the thread.
oh okay cool :)