https://archive.ph/px0uB
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
https://www.reddit.com/r/singularity/comments/va133s/the_google_engineer_who_thinks_the_companys_ai/

  • reddit [any,they/them]
    ·
    edit-2
    2 years ago

    Another CS person weighing in here with a slightly different take from some of the rest of this thread. I actually think "strong" AI is impossible. I think it is a malformed question, simply because we do not understand what makes "consciousness" what it is. I'm sure as time progresses we will have increasingly convincing (and increasingly helpful!) AI assistants, but there is no amount of data that can make these programs into a genuine "consciousness."

    Separately, I also think searching for it is a waste of time we do not have, and would still be a waste even if we did have it, but I am trying to put that aside while reading this.

    Gonna ramble here. There's just genuinely no such thing as a computer feeling anything, let alone "sad." There's no emergent consciousness somehow popping into existence, at least not from how computers exist in the present day. Maybe someone will come up with some magic architecture some day that changes my mind, but a von Neumann machine is just not something I could ever believe would exhibit anything more than increasingly better attempts to reproduce what we have told it to reproduce.

    Any "consciousness" a computer displays is a human anthropomorphizing what is an increasingly performant Chinese room, and it's hard to blame us, since empathy is something we evolved to do. Add to that the fact that we are so terrified of being alone that we have spent the past hundred years inventing stories about creating artificial life, multiply by marketing buzzwords, and what you're left with is engineers like this guy: people who are either so detached from human life they genuinely believe they've recreated it in a bit of linear algebra and statistics, or who understand that if they say they've done that, their stock price will double, because no one but them understands that it's just linear algebra and statistics.

    I dunno. Maybe I've got terminal engineer brain, but a computer is a computer, and that's all it will ever be.

    EDIT: Reading more of the thread, glad to see I'm not actually deviating that much from y'all. Guess I'm more used to spaces where people take this shit seriously for no reason.

    • Llituro [he/him, they/them]
      ·
      2 years ago

      I think developing a consciousness is feasible in the long term, but either it will require so much development that it's not even an interesting project by the time it's possible, or it will turn out that a simulation of consciousness isn't any better at reasoning than us.

      • reddit [any,they/them]
        ·
        2 years ago

        I think that still requires a definition of "consciousness" though. Until that is sufficiently defined, any claim of a computer being conscious is just drawing a slightly different line between "very convincing random number generator" and "living brain made of silicon."

        • Alaskaball [comrade/them]
          ·
          2 years ago

          Human brains are bio-electrical machines; they are deterministic in the same way a computer is, just an extremely powerful one. Thinking in a human is a physical process, so that physical process could theoretically be emulated exactly by a computer. Whether such a thinking machine was conscious could not be determined.

          Isn't that the whole schtick behind the smart AIs in the Halo video game series? Where their waifu-bots were created by cloning the brain of some smarty-pants and creating a botbrain based off their neural pathways and shit

    • sgtlion [any]
      ·
      edit-2
      2 years ago

      Nonsense, and the Turing test is simple enough for real-life practical purposes. Anything that appears to be consciousness may as well be 'real' consciousness.

      I personally find the Chinese room a bizarre objection: it amounts to saying that an intermediate step somehow makes something impossible. There is zero difference between doing something and simulating doing it if all the inputs and outputs are the same.

        • sgtlion [any]
          ·
          2 years ago

          I feel like I agree with all of this, and obviously current AI is relatively hollow. But that's the point: it's not successfully simulating a human mind and conversing, it's just replicating bits of speech in legible ways.

          The point of the focus on language is that it betrays an entire complex of the mind. If we can converse meaningfully about novel topics, then we can almost guaranteedly talk about how an animal behaves, and replicate it, and come up with a physicality for doing so, etc. This is the fundamental hurdle, and there's nothing magic about the human brain that allows it, nor anything magic about the digital form that disallows it.