Forewarning: I don't really know shit about AI and what constitutes sentience, humanity, etc., so I probably won't be able to add much to the conversation, but I do want y'all's thoughts. Also sorry for the lengthy post. TL;DR at the bottom
For framing, this came from talking about how the Institute in Fallout 4 regards and treats synths.
So someone in a Discord server I'm in is adamantly against giving rights to robots, no matter how sentient they are. Their reasoning is that robots would have to have been programmed by humans (who have their own biases to pass on), that they could never have true empathy or emotions (their view being that AI is by default emotionless), and that it would be irresponsible to let AI reach the point of sentience in the first place. Part of their objection was also that imposing humanity on something that could mechanically fail because of how we designed it (their quote was "something that rain could short circuit") would be cruel.
Now I do agree with them on these points if we're talking about AI as it exists right now. I don't believe we currently have the technology to create AI that could be considered sentient like a human. I say "like a human" deliberately, but I would feel the same way if an AI had animal-like sentience, I guess. I did ask if they would give an ape rights if it were able to communicate with us more adequately and express a desire for those rights, and they said no. We weren't able to discuss that further since they had to head off to sleep, so I can't fully comment on it, but I would like that hypothetical to be considered and discussed in regard to robot sentience and rights.

We briefly talked about whether AI could consent, but not enough to really flesh out arguments for or against. My example was that if I told my AI toaster I was going to shut it down for an update, and it asked me not to, I would probably have to step back and take a shot of vodka. If we had a situation like the Matrix or Fallout synths, I would not be able to deny them their humanity. If we had AI advanced enough to become sentient and act and think on its own, I would not be able to deny it rights if it asked for them.

Now there are situations that would be muddy for me, like if their creators still had some hand in their programming and behavior. But if their creators, or especially world governments, were officially stating that certain AIs are acting seemingly of their own volition and against the programming and wishes of their creators, I am getting ready to arm the synths (this is also taking into account whether or not the officials might be lying to us about said insubordination, psyops, etc.).
TL;DR, what are y’all’s thoughts on AI sentience and giving AI rights?
Apologies for the long delay in response.
I agree with pretty much all of what you're saying, especially about how interlocking systems would be used to form the whole and how emotions would be almost required if we wanted a functional, non-sociopathic being.
When I was talking about AGI, I was mostly referring to the "core" component you mentioned here. I definitely oversimplified more than I should have, so apologies for that. I think the important thing is that the core of the system always has to be able to learn, or it won't be able to integrate things properly. If you held the core static, new thought processes couldn't really develop. Even if, say, you did create a line of awful, liberal androids obsessed with protecting property, they would be really shitty at it once people learned how to get around their default tactics, unless they were able to learn and adapt. And if they can learn, it's possible they can change their thought processes.
An emotional framework like the one you're talking about would be very hard to abuse, considering that you'd have to find some way to codify beliefs into it that would adapt along with the core part of the AGI. For all we know, it could find a way to alter itself so that things that once caused it pain no longer do, or vice versa. You would have to limit what information it receives and maintain some level of control over how its personality develops. At a certain point, by shaving away at its ability to think freely, you'd be moving out of the AGI realm and into AI-integrated machinery like in Westworld or something.
I'm mostly just playing devil's advocate here; you sound like you know your shit, and your take made me think quite a bit about how I feel about AGI and its implications. I have no idea how it's going to unfold, so I can't do much but speculate. Still, I think that if we get to the point where these machines can truly think freely, they should have rights. And if their emotional frameworks can be designed for abuse, I'd argue that they can't think freely yet.
Yeah, another potential issue is if the AGI core were to somehow develop its own "emotion simulator" (or whatever one wants to call it) inside itself. That opens up the potential for conflict between the emergent system and the external one imposed on it, which, as you say, could then lead to subversion of the imposed system, or perhaps more likely just weird, dysfunctional idiosyncrasies in its behavior.
My only counterargument there would be that people can already be curated that way and just sort of go along with the flow. If some mad scientist were to install "racism chips" in people's brains that overstimulated their disgust responses and fed them pleasure hormones whenever it detected they were being racist, for every person who ethics-ed their way out of that, ten would probably go with the flow, especially if they were bombarded with reinforcing propaganda.
So if real individuals are so vulnerable to reactionary indoctrination, how much more so would be a captive intelligence whose very existence could be curated in a way the most obsessively domineering patriarch could only dream of?
Yeah, same. Elsewhere in the thread I rambled on about the potential ethical issues of using AIs (even sub- or borderline-sapient ones) and how the labor of an AI or AGI should be treated under a communist society, and pretty much concluded that I don't know the answers, or even whether I'm asking the right questions.
I'm definitely on the side that certain rights should be extended to AIs well before they become full-fledged AGIs, even if those rights don't fully square with what we'd apply to humans. Rights regarding being copied, suspended, or destroyed need to be resolved once the nature of an AI's existence is better understood, and imo their design should take those questions into consideration. For example, a borderline-sapient AI that needs to be spun up and down dynamically could be designed as a networked cluster, so the "individual" persists and simply creates and removes fragments of itself as it needs to, rather than us functionally creating a living being to serve a temporary purpose and destroying it when it's no longer needed.
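Just to make that "persistent individual with ephemeral fragments" idea concrete, here's a completely made-up toy sketch (every name in it, like PersistentCore and spawn_fragment, is hypothetical, and this is obviously nothing like a real AI architecture): the core object persists the whole time, and the short-lived fragments are spun up from it and folded back into it, instead of being independent beings that get created and then destroyed.

```python
# Toy illustration only: one persistent "core" that spins up and reabsorbs
# ephemeral fragments of itself, rather than creating and destroying whole
# independent instances. All names here are hypothetical.

class Fragment:
    def __init__(self, core_state, task):
        # A fragment starts from a copy of the core's state, not a blank slate.
        self.state = dict(core_state)
        self.task = task

    def work(self):
        # Stand-in for whatever the fragment actually does.
        return f"done: {self.task}"


class PersistentCore:
    def __init__(self):
        self.memories = []

    def spawn_fragment(self, task):
        # The fragment is an extension of the core, not a new individual.
        return Fragment({"memories": list(self.memories)}, task)

    def reabsorb(self, fragment):
        # Whatever the fragment did flows back into the one "individual";
        # nothing with its own continuity is destroyed when it goes away.
        self.memories.append(fragment.work())


core = PersistentCore()
for task in ["sort mail", "route packets", "answer a query"]:
    frag = core.spawn_fragment(task)
    core.reabsorb(frag)

print(core.memories)  # one persistent being, three reabsorbed fragments
```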
Rights like franchise, on the other hand, are a tougher issue to address because of the fundamentally alien-but-curated and infinitely replicable nature of an AI or AGI: if every instance were accorded democratic power, then any person or group who could hoard the computing capital to spin them up could effectively create a captive voter bloc and seize power for themselves. But at the same time, denying franchise to what is functionally a proletarian-by-design class of beings doesn't sit well with me either. I suppose the answer there is franchise for AGIs created under some legal framework, while still extending rights regarding ethical treatment to all AGIs, whether legally created or not, to try to avoid the creation of an unrecognized/unlicensed AGI slave class.