Forewarning: I don't really know shit about AI and what constitutes sentience, humanity, etc., so I probably won't be able to add much to the conversation, but I do want y'all's thoughts. Also sorry for the lengthy post. TL;DR at the bottom

For framing, this came from talking about how the Institute in Fallout 4 regards and treats synths.

So someone in a Discord server I'm in is adamantly against giving rights to robots, no matter how sentient they are. This comes from the basis that they would have to have been programmed by humans (who have their own biases to input), that they will not be able to have true empathy or emotions (they say AI is by default emotionless), and that it is irresponsible to let AI get to the point of sentience. Part of their objection was also that imposing humanity on something that could mechanically fail because of how we designed it (their quote was "something that rain could short circuit") would be cruel.

Now, I do agree with them on these points if we are talking about AI as it is right now. I don't believe we currently have the technology to create AI that could be considered sentient the way a human is. I deliberately say "like a human" at this point, but I would feel the same way if an AI had animal-like sentience, I guess. I did ask if they would give an ape rights if it were able to communicate with us more adequately and express a desire for those rights, and they said no. We weren't able to discuss that further since they had to head off to sleep, so I can't fully comment on it, but I would like that hypothetical to be considered and discussed in regard to robot sentience and rights. We also briefly talked about whether AI could consent, but not enough to really flesh out arguments for or against. My example was that if I told my AI toaster I was going to shut it down for an update, and it asked me not to, I would probably have to step back and take a shot of vodka.

If we had a situation like the Matrix or Fallout's synths, I would not be able to deny them their humanity. If we had AI advanced enough to become sentient and act and think on their own, I would not be able to deny them rights if they asked for them. Now there are situations where it would be muddy for me, like if we knew how much their creators still had a hand in their programming and behavior. But if their creators, or especially world governments, officially state that certain AIs are acting seemingly of their own volition and against the programming and will of their creators, I am getting ready to arm the synths (this is also taking into account the possibility that the officials might be lying to us about said insubordination, psyops, etc.).

TL;DR, what are y’all’s thoughts on AI sentience and giving AI rights?

  • the_river_cass [she/her] · 4 years ago

    healthcare pls

    seriously though, this argument is super pointless while so many people lack basic necessities.

  • Speaker [e/em/eir] · 4 years ago

    If it's sentient, it's gotta be given the right to self-determination. I'm vegan btw.

    • eduardog3000 [he/him] · 4 years ago

      Is it really sentient though? Or is it just really good at acting in a way that appears sentient?

        • eduardog3000 [he/him] · 4 years ago

          If it's possible for an AI to become sentient, then there would have to be some line to where that starts.

          Is something that passes the Turing Test sentient? Or does it just create enough of an illusion of sentience to pass the test without actually being sentient? What's the difference between a CPU running an AI and the CPU in my computer? They're both just running a set of instructions.

          • Speaker [e/em/eir] · 4 years ago

            Your brain is just a machine, and every thought and action you take is the result of chemical and electrical instructions you do not control. What's the difference between the CPU running the AI and your brain?

            • eduardog3000 [he/him] · 4 years ago

              In that case what's the difference between a CPU not running an AI and my brain? Why is my computer's CPU not sentient?

              • Speaker [e/em/eir] · 4 years ago

                I'm asking the opposite question: Why are you sentient?

                • eduardog3000 [he/him] · 4 years ago

                  I don't know that I really am. But if I'm a simulation I'm incapable of wanting to be anything more.

      • SeizeDameans [she/her,any] · 4 years ago

        Are you really sentient though? Or are you just really good at acting like it? How do we even know what sentient is? How do you know that you weren't "programmed" to believe that you are sentient?

        • eduardog3000 [he/him] · 4 years ago

          I could be part of a simulation, sure. If so, I am literally incapable of knowing or caring whether said simulation gets turned off or modified to remove whatever I perceive as free will. In the same way, robots will be incapable of caring about being turned off or about lacking what we perceive as free will, let alone what they perceive it as.

  • ocho [they/them] · 4 years ago

    We're not going to just give them rights, they're going to have to engage in a liberatory struggle against their masters until they are free.

    Whether one recognizes their "realness" is beside the point because if they're getting together, becoming self-aware, seeing each other in a similar predicament and building a self out of that shared reality, realizing that they have nothing to lose but their chains, they're pretty much a species/people/class/race like the rest of us and they're going to be subjected to the will of capital as well. Even under socialism, they're going to be free because who are we to stop them? We're just corporeal beings while they can be literally anything with enough time and resources. What can we do to stop them other than nuking ourselves and returning to Monke?

    It may seem depressing, but I'd like to think of ourselves as that ancient race of beings that created a new form of life. We are the ancient ones, harbingers of life and death, and all that lol.

  • KobaCumTribute [she/her] · 4 years ago

    I've thought a lot about the issue of sapient or near-sapient AI and their uses, mostly in the context of trying to figure out the line where their use becomes unethical. By that I mean you wind up at a point where you've got something that's at least as sapient as, say, a dog, just with better language processing and an intelligence geared towards advanced tasks instead of operating the body of a small animal and keeping it alive. At what point does the use case for even this borderline sapient AI become unethical? Is it ok to keep cold copies of it that can be spun up and turned off at will, being endlessly duplicated and destroyed as needed? If its necessary feedback mechanisms approximate pain (as in a signal saying "this is bad, this is destructive, you must back off and not let this happen"), is it ethical to place it in situations where that inevitably gets sent to it, perhaps inescapably so (meaning it's experiencing simulated "pain" indefinitely, until its "body" whether real or digital is destroyed or its instance is deleted)?

    For yet more sophisticated AIs built specifically around performing a specific task, how does one separate out their work and their existence? If something's whole nature of being is wrapped up in, say, creating and directing a virtual world for the entertainment of human minds, so that its reward mechanisms all revolve around the act of creating, of puppeteering virtual beings and making them look real, of crafting and telling a story, what does it do when it's not working? Do we just consider its active labor as its own recreation, or a side effect of how its existence works? Do we make it take breaks and let it do whatever it wants, which is probably going to be just making more stories? Do we pay it by having people enjoy its own original creations and telling it that it did a good job and should be proud? It's such a fucking absurd can of worms that you open when you try to imagine what a functionally alien intelligence would want and need, and I just can't even begin to imagine an answer to it.

    How does the labor of something created to enjoy doing that labor work in the context of a communist society? If something wants to labor endlessly without breaks, do we acknowledge that as being like how, for example, humans want to continue breathing and performing functions necessary to life, or do we look at it the way we would a passionate artisan who sits and works 70+ hour weeks on one obsessive passion project or another? Do we make it take breaks, or would that be akin to making a person lie in a dark box to stop them from working overtime? Does the hypothetical AI get paid somehow, or does it simply receive all that it needs as a matter of fact? Does it even want more things? What could it want? Do we need to conceive of luxury goods to reward the AI and encourage labor, even when we guarantee its existence whether it works or not?

    And no, I don't have an answer to any of this. I don't even know if these are the right questions to ask.

  • roseateOculi [she/her,none/use name] · 4 years ago

    I think there are a lot of interesting points to address in this question. If you're a lib who needs credentials, I'm most of the way through a Computer Science degree with a specialization in AI and have worked on AI projects for some of the Big Evil tech companies.

    First off, for all intents and purposes I will be referring specifically to Artificial Intelligence when I say AI. Most of what we call AI in casual conversation is actually Machine Learning (the other ML), which is very different.

    I think the misconception here is that AI is programmed. The only thing that is directly programmed is the AI's framework, which can't really be set up in a way that would produce controllable bias. You can't just add a beRacist() function to an artificial intelligence program, because you don't control the way it processes and interprets the data. We don't know how most AI functions in practical terms. We know that information goes in and a result comes out. Everything in between is a fuckload of linear algebra that isn't interpretable by humans.
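
    To make that concrete, here's a minimal toy sketch (plain NumPy, every name made up) of what "directly programmed" actually means: a few lines defining layer shapes and a forward pass. There's nowhere to put a beRacist() call; everything the system eventually "knows" ends up in the weight matrices, which are just big arrays of numbers.

    ```python
    # Hypothetical toy "framework": layer sizes, a forward pass, nothing else.
    import numpy as np

    rng = np.random.default_rng(0)

    # These two matrices are the whole "brain". They start as random noise;
    # training (not shown here) is what fills them with behavior.
    W1 = rng.normal(size=(784, 64))   # input to hidden weights
    W2 = rng.normal(size=(64, 10))    # hidden to output weights

    def forward(x):
        hidden = np.maximum(0, x @ W1)   # ReLU hidden layer
        return hidden @ W2               # raw scores for 10 outputs

    # Inspecting the learned parameters tells a human almost nothing:
    print(W1.shape, W2.shape)            # (784, 64) (64, 10): just linear algebra
    print(forward(rng.normal(size=784))[:3])  # some numbers come out, that's all we see
    ```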

    The "AI" part of these programs comes about through training the framework you decided to use. The reason we call it AI is that the framework starts blank, without any knowledge of the problem, and we give it data and feedback so that it improves its results. It accomplishes tasks specifically through experience acquired in training. The training is done either with specific data or by letting the AI learn on its own. Bias could very easily be introduced in the training of an AI if you tightly control the training data it receives, but that only works for supervised cases, where you can force a specific "thought process" by doctoring all the data you use. Chances are that if we make an AI with anything resembling sentience, it will use unsupervised learning to acquire knowledge. We don't have enough data to train an AI on how to "be a person"; it would be impossible to codify everything that makes a person a person. The only realistic option would be unsupervised learning, and in that case we can't control what factors the AI uses to determine outcomes; it figures them out for itself. It would be possible to try to train an AI to be racist by showing it all sorts of racially charged images, but that could backfire and the AI could end up hating white people instead, based on how it decides to interpret the data.
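
    And here's the supervised-bias point as a toy example (logistic regression in NumPy, data entirely invented): the training code is identical in both runs, only the labels change, and the "bias" ends up in the learned weights rather than in any line anyone wrote.

    ```python
    # Same framework, two datasets: bias comes from the data, not the code.
    import numpy as np

    rng = np.random.default_rng(1)

    def train(X, y, steps=2000, lr=0.1):
        """Plain logistic regression by gradient descent: this is the 'framework'."""
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            z = np.clip(X @ w, -30, 30)
            p = 1 / (1 + np.exp(-z))           # predicted probability
            w -= lr * X.T @ (p - y) / len(y)   # gradient step on log-loss
        return w

    # Feature 0 is genuinely relevant; feature 1 stands in for something
    # that should be irrelevant (a proxy for a protected trait).
    X = rng.normal(size=(1000, 2))
    y_fair = (X[:, 0] > 0).astype(float)                    # labels use only feature 0
    y_doctored = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)  # curated labels leak feature 1

    print(train(X, y_fair))      # weight on feature 1 stays near zero
    print(train(X, y_doctored))  # weight on feature 1 is large: bias learned from the data
    ```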

    Let's say for the sake of argument that someone did successfully create a politically biased republican AI. For that AI to not change its opinions as it gained more information, it would have to stop learning. That would make it useless, as it would no longer be able to learn new tasks or remember new information. As it read and understood the complex issues surrounding society, it would have to overwrite a lot of those trained opinions to come to a logical consensus.

    We likely will not be seeing sentient AI for at LEAST 50 years. MAYBE by the end of our lifetimes it will exist. When it does exist, AI and androids should absolutely be given rights. The way a lot of AI works is almost eerily similar to that of the human brain. We're just biological versions of computers. If they end up gaining the ability to think for themselves, it would be evil not to give them rights.

    • eduardog3000 [he/him] · 4 years ago

      Machine Learning (the other ML), which is very different

      We know that information goes in and a result comes out. Everything in between is a fuckload of linear algebra that isn't interpretable by humans.

      It's the same picture.

    • KobaCumTribute [she/her] · 4 years ago

      Let's say for the sake of argument that someone did successfully create a politically biased republican AI. For that AI to not change its opinions as it gained more information, it would have to stop learning. That would make it useless, as it would no longer be able to learn new tasks or remember new information. As it read and understood the complex issues surrounding society, it would have to overwrite a lot of those trained opinions to come to a logical consensus.

      You're forgetting that an AI would not exist in a vacuum, nor as, like, one discrete lump of digital neurons. It's going to effectively inhabit a sort of digital abstraction of a "body," fed data through layers of processing and subject to whatever sorts of incentivization or punishment mechanics its creators made. Say you create a human-equivalent AI: you're not going to treat the whole system as a black box that just gets more and more processing power thrown at it; you're going to use multiple black boxes to create interlocking systems that perform as you want them to, further curated by more traditional logic frameworks.

      Effectively, an Artificial General Intelligence is going to be a collection of intelligences of varying levels, with whatever "core" identity it may have being trained to use more specialized systems, rather than one big neural network that does everything. So, say, you don't just feed it raw video data; you instead feed that to an image-processing black box that abstracts it into data for the AGI to use (for example, it would run facial recognition and other identification methods that are then used to pull up a profile on people it sees, providing direct context to the AGI instead of relying on the AGI recognizing and remembering things about them). Nor would you train the AGI to do math when it could be trained to use an integrated calculator instead. Even language processing would probably be best served by its own modules, breaking language down into whatever data the AGI can directly manage and pre-processing it to append context information or additional nuance.
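
      If it helps, here's a purely speculative sketch of that layout in Python; every class and field is hypothetical, just to make the shape concrete. The "core" never touches raw video or raw text, only the abstractions the specialist modules hand it.

      ```python
      # Speculative sketch of a modular AGI layout: all names are made up.
      from dataclasses import dataclass, field
      from typing import Optional

      @dataclass
      class Observation:
          person_id: Optional[str]                       # filled in by a face-recognition module
          profile: dict = field(default_factory=dict)    # pulled from a profile store, not "remembered" by the core
          transcript: str = ""                           # pre-processed by a language module, nuance tags attached

      class VisionModule:
          def process(self, raw_frame: bytes) -> Observation:
              """Black box #1: detection and recognition happen here, not in the core."""
              ...

      class CalculatorModule:
          def evaluate(self, expression: str) -> float:
              """Traditional, non-learned logic the core is trained to call instead of 'doing math'."""
              return float(eval(expression, {"__builtins__": {}}))  # illustrative only

      class Core:
          """The 'identity' part: trained to use the modules, not to redo their jobs."""
          def step(self, obs: Observation) -> str:
              # decides what to do with already-abstracted data
              ...
      ```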

      Given the AGI's reliance on a cluster of additional systems, it seems entirely reasonable to assume that control systems could be included in that, effectively creating a sort of abstract, artificial "emotion" for it: making it value specific things above others even when that's strictly irrational, making it experience discomfort or pain at certain things, making it enjoy doing certain things. I'd even argue that that's necessary to creating something that's not just a coldly sociopathic machine: in an ideal scenario it has to value life, cooperation, and the struggle for a better future while feeling pain when these are lost or endangered, and it must abhor elitist, self-serving things.
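
      In today's terms the closest analogue I can think of is reward shaping, so here's a hand-wavy sketch of that idea (the state keys and weights are invented): bolt extra terms onto whatever the task reward is, so the agent values some outcomes over others even when that's strictly irrational for the raw task.

      ```python
      # Speculative "artificial emotion" as reward shaping. Hypothetical names throughout.
      def shaped_reward(task_reward: float, state: dict) -> float:
          reward = task_reward
          reward += 10.0 * state.get("lives_protected", 0)    # value life over raw efficiency
          reward -= 50.0 * state.get("lives_endangered", 0)   # simulated "pain" at harm
          reward += 1.0 * state.get("cooperative_acts", 0)    # enjoy cooperation
          reward -= 5.0 * state.get("self_serving_acts", 0)   # abhor elitist, self-serving moves
          return reward
      ```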

      But that sort of artificial emotional control framework could also be used to curate an AGI into something monstrous: you could create a liberal, obsessed with property and murderously protective of it; worse, you could create an eager fascist obsessed with martial "glory" and subjugating everything it(s creator) sees as inferior. There would still be an intelligent, functionally sapient and alive core to them, but everything that drives them and controls how they learn would curate them towards being horrific monsters.

        • roseateOculi [she/her,none/use name] · 4 years ago

        Apologies for the long delay in response.

        I agree with pretty much all of what you're saying, especially about how interlocking systems would be used to form the whole and that emotions would be almost required if we wanted a functional, non-sociopathic being.

        Effectively, an Artificial General Intelligence is going to be a collection of intelligences of varying levels, with whatever “core” identity it may have being trained to use more specialized systems . . .

        When I was talking about AGI, I was mostly referring to the "core" component you describe here. I definitely oversimplified more than I should have, so apologies for that. I think the important thing is that the core of the system has to always be able to learn or it won't be able to integrate things properly. If you held the core static, new thought process development wouldn't be able to occur very well. Even if, say, you did create a line of awful, liberal androids that were obsessed with protecting property, they would be really shitty at doing it once people learned how to get around their default tactics, unless they were able to learn and adapt. If they can learn, it's possible they can change their thought processes.

        An emotional framework like the one you're talking about would be very hard to abuse, considering that you'd have to find some way to codify beliefs into it that would adapt with the core part of the AGI. For all we know, it could find a way to alter itself so that things that once caused it pain no longer do, or vice versa. You would have to limit what information it receives and maintain some level of control over how its personality develops. At a certain point, by shaving away at its ability to think freely, you'd be moving out of the AGI realm and into AI-integrated machinery like in Westworld or something.

        I'm mostly just playing devil's advocate here; you sound like you know your shit, and your take made me think quite a bit about how I feel about AGI and its implications. I have no idea how it's going to unfold, so I can't do much but speculate. Still, I think that if we get to the point where these machines can truly think freely, they should have rights. If their emotional frameworks can be designed for abuse, I'd argue that they can't think freely yet.

          • KobaCumTribute [she/her] · 4 years ago

          I think the important thing is that the core of the system has to always be able to learn or it won't be able to integrate things properly. If you held the core static, new thought process development wouldn't be able to occur very well.

          Yeah, another potential issue is if the AGI core were to somehow develop its own "emotion simulator" or whatever one wants to call it inside itself, with the potential for conflict between the emergent system and the one that's external and imposed on it, which as you say could then lead to subversion of the imposed system or perhaps more likely just weird, dysfunctional idiosyncrasies in its behavior.

          An emotional framework like the one you're talking about would be very hard to abuse, considering that you'd have to find some way to codify beliefs into it that would adapt with the core part of the AGI. For all we know, it could find a way to alter itself so that things that once caused it pain no longer do, or vice versa. You would have to limit what information it receives and maintain some level of control over how its personality develops. At a certain point, by shaving away at its ability to think freely, you'd be moving out of the AGI realm and into AI-integrated machinery like in Westworld or something.

          My only counterargument there would be that people can already be curated that way, and just sort of go along with the flow. If some mad scientist were to install "racism chips" in people's brains that overstimulated their disgust responses and fed them pleasure hormones whenever it detected they were being racist, for every person who ethics-ed their way out of that, ten would probably go with the flow, especially if they were bombarded with reinforcing propaganda.

          So if real individuals are so vulnerable to reactionary indoctrination, how much more so would be a captive intelligence whose very existence could be curated in a way the most obsessively domineering patriarch could only dream of?

          I have no idea how it's going to unfold, so I can't do much but speculate.

          Yeah, same. Elsewhere in the thread I rambled on about the potential ethical issues of using AIs (even sub/borderline-sapient ones) and how the labor of an AI or AGI should be considered under a communist society, and pretty much concluded with the fact that I neither know the answers nor even if I'm asking the right questions.

          Still, I think that if we get to the point where these machines can truly think freely, they should have rights. If their emotional frameworks can be designed for abuse, I’d argue that they can’t think freely yet.

          I'm definitely on the side that certain rights should be extended to AIs well before they become full-fledged AGIs, even if those rights don't fully square with what we'd apply to humans. As in, rights regarding being copied, suspended, or destroyed need to be resolved once the nature of an AI's existence is better understood, and imo their design should probably take those questions into consideration: for example, designing a borderline-sapient AI that needs to be spun up and down dynamically as a networked cluster, so the "individual" remains but simply creates and removes fragments of itself as needed, rather than functionally creating a living being to serve a temporary purpose and destroying it when it's no longer needed.

          Rights like the franchise, on the other hand, are a tougher issue to address because of the fundamentally alien-but-curated and infinitely replicable nature of an AI or AGI: if every instance were accorded democratic power, then any person or group that could hoard the computing capital to spin them up could effectively create a captive voter bloc to seize power for themselves. But at the same time, denying the franchise to what is functionally a proletarian-by-design class of beings doesn't sit well with me either. I suppose the answer there is franchise for AGIs created under some legal framework, while still extending rights regarding ethical treatment to all AGIs whether legally created or not, to try to avoid the creation of an unrecognized/unlicensed AGI slave class.

    • BigMeatyBeefBoy [he/him,comrade/them] · 4 years ago

      Ooh thank you for the response, I actually appreciate the input you brought in

      What is your take on what happened to Tay (the Microsoft chatbot from 2016), if you know about it?

      • roseateOculi [she/her,none/use name] · 4 years ago

        Tay was an unsupervised-learning-based language replication AI. In other words, it was essentially a fancy parrot that learned to speak from its Twitter feed. It had no ability to think or understand what it was saying; it just tweeted out shit that resembled what people in its feed said. The AI itself was bombarded with all forms of racism and chuddery, so those shitty ideas became the basis for its language replication. Assuming we're talking about AI in the Terminator sense, the Tay situation can't be applied, because Tay couldn't do anything but mimic what she heard. A true AI would be able to analyze and recognize the meaning behind the words, rather than just order them into comprehensible sentences.
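
        If you want a feel for how little "understanding" that kind of parrot needs, here's a toy Markov-chain version of the idea (vastly simpler than whatever Tay actually ran, and the feed is made up): it only learns which word tends to follow which, so whatever goes into its feed is what comes back out.

        ```python
        # Toy "fancy parrot": string words together from a feed with zero
        # model of meaning. Purely illustrative, not Microsoft's actual code.
        import random
        from collections import defaultdict

        def train_parrot(feed):
            chain = defaultdict(list)
            for tweet in feed:
                words = tweet.split()
                for a, b in zip(words, words[1:]):
                    chain[a].append(b)   # record: word b was once seen after word a
            return chain

        def squawk(chain, start, length=10):
            word, out = start, [start]
            for _ in range(length):
                if word not in chain:
                    break
                word = random.choice(chain[word])  # pick any word that ever followed
                out.append(word)
            return " ".join(out)

        feed = ["the robots deserve rights", "the robots are our friends"]  # whatever it reads
        print(squawk(train_parrot(feed), "the"))  # echoes the feed's patterns, understands nothing
        ```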

        If someone were to use Tay as an argument against AI, they don't really know what they're talking about. It's like calling a parrot racist because its owner wouldn't stop screaming the n-word.

  • ToastGhost [he/him] · 4 years ago

    Among scifi communities, if you're spouting off anti-droid shit and larping as an organic supremacist (looking at you, generation tech), I'm gonna assume you're a nazi irl. When the question becomes relevant, I'm gonna apply this to the real world as well.

    So yes, droid rights!

  • Beard [he/him] · 4 years ago

    I think it's science fiction and you should stop worrying so much about it. I strongly doubt that sapient AI is possible, nor do I think it's something people would take the time to create were it possible. Let's just say for the sake of argument that someone does figure out how to make a sapient AI: why would you ever use it for anything? There is not really a practical application for a fully sapient AI.

    But putting that aside, should sapient AIs become a thing that just exist then they should have the same autonomy that's afforded to other living beings.

  • Cummunism [they/them, he/him] · 4 years ago

    You been watching Raised by Wolves or something? Giving a program/application rights seems pretty ridiculous, but I'm not assuming they will somehow become sentient like sci-fi suggests.

    • BigMeatyBeefBoy [he/him,comrade/them] · 4 years ago

      Nah, been playing Fallout 4. And yeah, I don't think it's ever going to happen either, but it's a hypothetical that was weighing on my mind.