• FunkyStuff [he/him]
    ·
    1 year ago

    Making art easier by allowing an artist to get the thing they actually wanna make a little bit more practical to realize = halal.

    Making "art" easier by making a stochastic model that just copy pastes training data from other artists to make a crude representation of what the user wrote into a text box = haram.

    • WoofWoof91 [comrade/them]
      ·
      1 year ago

      i use stable diffusion to generate npc images in my ttrpgs
      it's handy to be able to churn out a bunch of decent looking tokens without having to trawl the internet for ages

      • FunkyStuff [he/him]
        ·
        1 year ago

        Yeah, I agree, it's kind of a blurry line. If someone draws something and uses AI to enhance it, it's not the end of the world, and I think it's still art unless the "enhancement" totally replaces big parts, or all, of the input. Otherwise, it's no different from any other tool that has made art easier to make.

        But I think in most cases generative AI can't make anything that could reasonably be considered art, because the substance they're taking from to make the output isn't even the user's. It's nothing more than a very advanced plagiarism machine where your prompt tells it which works to plagiarize from.

        • Tak@lemmy.ml
          ·
          edit-2
          1 year ago

          It’s nothing more than a very advanced plagiarism machine where your prompt tells it which works to plagiarize from.

          Probably an unpopular opinion but I disagree. People have been learning from other artists for so long that I want to say art is iterative and not just transformative. Just because an art style is copied doesn't make it less art in my opinion. As soon as you create art you are creating a template to be copied and to be iterated upon. It's why we have genres in art, it's why so many songs use the same chords, and why art progresses.

          In my opinion AI doesn't create art, not because it copies but because it doesn't understand what it is making. But if you were to use samples from AI work to piece something together, that work could have understanding behind it, and it could be art. The same way a photographer might create art from a landscape they didn't create, but didn't just copy either. I wouldn't doubt that 200 years ago there were artists who accused the camera of just copying art, but I think we can look at some photographs and see them as art, and others that aren't.

          • FunkyStuff [he/him]
            ·
            1 year ago

            I think the photographer example you gave does touch upon an interesting point, since there really were people who ridiculed photography as not art. And honestly, the criteria I laid out kinda would disqualify photography, which is unfair.

            Would AI be able to create art if it really did understand how the pieces it's putting together are part of what the user wants? I think it might be a useless question, because skeptics (like me) can keep shifting the goalposts of what understanding really means. So it's unfalsifiable in a way. Some techbros claim AI can understand because it's capable of minimizing a loss function. But I'm not satisfied by that, because it amounts to claiming that if a system performs a task well, the system has a cognitive understanding of the task. It's a non sequitur, and I've seen AI enthusiasts make the same form of non sequitur a thousand times.

            Maybe the conclusion we can draw from it is that trying to define what exactly is and isn't art is hard, but clearly, the OP is not.

            • Tak@lemmy.ml
              ·
              1 year ago

              It is incredibly difficult, and even among art made by people there are pieces some would say aren't art. Honestly I feel like art is only art when the audience can understand, at least roughly, how it was made. Personally I think that art can be more than nonsensical, that it can have purpose; even a smartphone can be art. People will disassemble and hang smartphones or electronics because they see beauty in the collection of components.

              I don't know enough to say what is or isn't art, only that I have an opinion of what I see as art. I don't think the OP posted art, not because a machine made it but because it looks wrong to me. I'm sure there are some who might see it as art, and I think we're allowed to disagree. But here we are with cameras on our person that most of humanity would have killed for, and we use them to take shitty selfies of ourselves and the food we're eating. A tool can fail to make art 99.99% of the time and still be capable of making art.

    • novibe@lemmy.ml
      ·
      edit-2
      1 year ago

      I would agree with you, if that was at all how the AIs generate images.

      They don’t “copy and paste” anything. The images they make are novel. The AI is only trained on other images. It doesn’t have access to them to copy them once the training ends.

      The way the AI generates new images is really similar to humans. It goes over its references and literally creates a brand new image.

      Now, just like a person, you can ask it to make something as an exact copy of something that exists. And it can do it like a human, through “technique” and references. But it’s not copying directly, it’s making a new image that is like the one you asked it to copy.

      I really wish people would realise this. Idk why the idea that image-generating AI is “copying” from a database of images is so prevalent…

      The database of images is literally only used during training. Once the AI is set the database doesn’t exist to it anymore.

      It’s the difference between an artist who studied their whole life, seeing paintings, seeing references, going to classes, to then create new images from their own mind, and one who traces images from Google.

      AI currently does the former, not the latter.

      • FunkyStuff [he/him]
        ·
        edit-2
        1 year ago

        Look, I know how deep learning works. I know it doesn't literally copy the images from the training dataset. But the entire point of supervised learning is to burn information about the training data into the weights and biases of a neural network in such a way that it generalizes over some domain, and can correlate the desired inputs with the desired outputs. Just because you're using stochastic methods to indirectly reproduce the training data (of course, in a way that's invisible to humans because of the nature of deep neural networks), doesn't suddenly erase the fact that the only substance an AI has to draw from is the training data itself.

        I think it's really oversimplifying how humans make art to say that it's just going over references and creating something new from it. As humans, we are influenced by the work we've seen, but because of our unique experience we inject something completely new into any art we make, no matter how derivative. An AI is incapable of doing the same (except for some random noise), because literally all it's capable of doing is composing together information that has been baked into its weights and biases. It's not like when you ask a generative AI to make something for you, it will decide to get funky with it. All it's doing is drawing from the information that has been baked into it.

        Just like how ChatGPT doesn't actually understand what it's saying because it's only capable of predicting statistical relationships between words one word at a time, and has no model of meaning, only of how words go together in the training data, AI that generates images doesn't actually know what it's making or why. That is totally different from humans who make a piece of art step by step and do so very deliberately.

        Edit: I recommend you watch this video by an astrophysicist who works with machine learning regularly, she makes my point a lot better than I can. https://youtu.be/EUrOxh_0leE

        • novibe@lemmy.ml
          ·
          edit-2
          1 year ago

          How would you classify those “experiences” people have that influence their art or work other than data? Honest question.

          And very interesting video. I still don’t 100% align with this perspective, because I feel it tries to give the brain something extra beyond materiality. While I’m no material reductionist, I don’t think our human creativity is “special” or “metaphysical”. It’s our brain, and it’s physical. It can be physically replicated.

          I think AI will have a “soul” or consciousness because I think everything already has it. It’s just our human biology that allows this consciousness to be self-experiential and experience other things, such as thoughts and ideas and feelings. A rock doesn’t have those, but it has a “soul” or consciousness. But I feel I digressed a lot lol

          Also to make it clear, I don’t think AI exists already. I think these models and developments we have are part of AI though.

          • FunkyStuff [he/him]
            ·
            1 year ago

            I don't disagree that experiences are data. The major distinction I'm making is that the human creative process uses more than just data: we have intention, aesthetics, we make mistakes, change our minds, iterate, etc. For a generative AI, the "creative process" is tokenizing a string, running the tokens through an attention matrix, and plugging that into a thousand different matrices that feed a post-processing layer which spits out an image. At no point does it look at what it's doing and evaluate how it's gonna fit into the final picture.

            As for the rest of your reasoning, I neither agree nor disagree, I think we just don't have the same definition of consciousness.

            • novibe@lemmy.ml
              ·
              edit-2
              1 year ago

              I feel your description of what a generative AI does is pretty reductive. The middle part of “plugging the ‘token’ through thousands of different matrices” is not at all well understood. We don’t know how the AI generates the images or text. It can’t explain itself.

              And we have ample research showing these models have internal models of the world and can have “thoughts”.

              In any case, what would you say consciousness is? This is a more interesting question to me tbh.

              • FunkyStuff [he/him]
                ·
                edit-2
                1 year ago

                Well, I don't see the problem: AI can't explain itself, but it's nothing more than matrix multiplication with a nonlinearity. Maybe you use a Fourier transform and a kernel instead of scalar weights for a convolutional neural network, maybe it has state instead of being purely feed-forward, but at the core of it all you're doing is multiplying matrices and applying a nonlinearity. I don't know what you mean when you say we don't know how it generates images and text. It's literally just doing the thing it was programmed to do?
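                For what it's worth, the whole forward pass really is just that. Here's a minimal sketch in NumPy (the layer sizes and the choice of ReLU are illustrative, not from any particular model):

```python
import numpy as np

def relu(x):
    # The nonlinearity: everything negative becomes zero.
    return np.maximum(0.0, x)

def feed_forward(x, layers):
    # Each layer is nothing but a matrix multiply, a bias add,
    # and a nonlinearity applied to the result.
    for W, b in layers:
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 2)), np.zeros(2))]
out = feed_forward(rng.normal(size=(1, 4)), layers)
print(out.shape)  # (1, 2)
```

                Everything fancier (attention, convolution, recurrence) is a variation on that loop.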

                What research? I'd like to see some evidence that these models "think," given that the way every LLM I know of works is by generating a single word at a time. When you ask a GPT how to bake bread, and the first word it outputs is "Surely!" it has no clue what explanation it'll start giving you. In fact, whether or not it chooses the exact word "Surely!" as the start of the response has a cascading effect on the rest of the output. Then, as I said earlier, LLMs don't see anything more than the statistical correlations between words. No LLM knows what gravity is, but when you ask it why things fall down, it has enough physics textbooks in its training data that it can parrot the answer from there.

                One of the ways I really broke down the idea that GPTs have any model of thought is playing this game. If AI had any actual model of meaning, it would understand security and it would understand not to just tell the player the password. Instead, it will literally blurt it out if you do as much as ask it for words that rhyme. You don't even need to mention "password," the way GPT works means that if it detects a lot of weight on a certain word in its previous prompt (which naturally would've emphasized the password), it's almost guaranteed to bring it up again. I know it's not exactly a hard proof, but it is fun.

                As for your last question you're out of luck because I'm actually just a Catholic lol, not a lot more to say than I believe that there is a metaphysical nature to human experience connecting us to a soul. But that's a completely unscientific belief to be honest, and it's not a point I can argue because it's not based on evidence.

                • novibe@lemmy.ml
                  ·
                  edit-2
                  1 year ago

                  It’s not true to say that LLMs just do as they are programmed. That’s not how machine learning and deep learning work. The programming goes into making the model able to learn and parse through data. The results are filtered and weighted, but they are not the result of the programming; they are the result of the training.

                  Y’know, like our brain was programmed by natural selection and the laws of biology to learn and use certain tools (eyes, touch, thoughts etc.) and with “training data” (learning or lived experience) it outputs certain results which are then filtered and weighted (by parents, school, society)….

                  I think LLMs and diffusion models will be a part of the AI mind, generating thoughts like our mind does.

                  Regarding the last part, do you think the brain or the mind create or are a part of the soul?

                  I think discussing consciousness is very scientific. To think there’s no point in doing so is reductionist to materiality, which is unscientific. Unfortunately many people, even scientists, are more scientistic than actually scientific.

                  • FunkyStuff [he/him]
                    ·
                    1 year ago

                    I don't know how much you know about computer science and coding, but if you know how to program in Python and have some familiarity with NumPy, you can make your own feed-forward neural network from scratch in an afternoon. You can make an AI that plays tic-tac-toe and train it against itself adversarially. It's a fun project. What I mean by this is: yes, LLMs and generative models do as they are programmed. They are no different than a spreadsheet program. The thing that makes them special is the weights and biases that were baked into them by going through countless terabytes of training data, as you correctly state. But it's not like AI has a secret, arcane mathematical operation that no computer scientist understands. What we don't understand about them is why they activate the way they do; we don't really know why any given part of the network gets activated, which makes sense because of the stochastic nature of deep learning: it's all just convergence on a "pretty good" result after getting put through millions of random examples.
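                    As a concrete version of that afternoon project, here's a complete feed-forward network from scratch in NumPy. I've swapped the tic-tac-toe task for XOR to keep it short, but the mechanics are the same at a tiny scale: matrix multiplies, nonlinearities, and gradient descent on the weights, with nothing arcane hiding anywhere.

```python
import numpy as np

# Two layers of matmul + nonlinearity, trained on XOR by
# hand-derived gradient descent. Layer sizes and hyperparameters
# are arbitrary choices for this sketch.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: just matrix multiplies and nonlinearities.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule, written out by hand.
    gz2 = (p - y) * p * (1 - p)        # gradient at the output layer
    gz1 = (gz2 @ W2.T) * (1 - h ** 2)  # gradient at the hidden layer
    W2 -= lr * (h.T @ gz2); b2 -= lr * gz2.sum(0)
    W1 -= lr * (X.T @ gz1); b1 -= lr * gz1.sum(0)

print(p.round().ravel())  # the network's learned answers for the four inputs
```

                    Swapping XOR for tic-tac-toe board states is mostly a matter of bigger matrices and a self-play reward signal instead of fixed labels.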

                    I think the mind and consciousness are separate from the soul that precedes their thoughts. But, again, I have absolutely no evidence for that. It's just dogma.