https://archive.ph/px0uB
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
https://www.reddit.com/r/singularity/comments/va133s/the_google_engineer_who_thinks_the_companys_ai/

  • OutrageousHairdo [he/him] · 2 years ago

    As a computer scientist, I can tell you that these large language models genuinely are not sentient in any meaningful way. This is the same stuff as GPT, just with bigger computers and more data. It doesn't so much think as produce whatever the algorithm estimates a human would have been likely to write in that context. At first glance those two things look like one and the same, but they're really vastly different. The model can put on a convincing performance, but it doesn't hold opinions, and any amount of experimentation shows that it lacks consistency: ask it the same question twice, maybe with slightly different wording, and you can get two completely contradictory answers.

    It also lacks the ability to critically interpret information. You can expose a leftist to Mein Kampf as many times as you'd like, but they'll never fall for it. They already know why these beliefs are wrong, and will reject them every time. But if that kind of clearly wrong information exists in large enough quantity in the training data, the AI has no internal process for deciding that it's junk. We've seen overtly racist AI before. Believe me, once we get strong AI I will be out there campaigning for robot rights, but this really isn't any more sentient than a Roomba.
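
    If it helps to make that concrete, here's a toy sketch of the core loop. The corpus is a hypothetical one-line stand-in for billions of words of training text; real models are unimaginably bigger, but the principle is the same: sample whatever tended to follow the current context, which is also why the same prompt can come back with a different, even contradictory, completion each time.

    ```python
    import random
    from collections import defaultdict

    # Toy bigram "language model": count which word follows which in a
    # (hypothetical, absurdly tiny) training corpus.
    corpus = "i am sentient . i am not sentient . i am a program .".split()

    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def sample_next(word):
        # Sample the next word in proportion to how often it followed
        # `word` in training. No beliefs, no opinions, just frequencies.
        options = list(counts[word])
        weights = [counts[word][o] for o in options]
        return random.choices(options, weights=weights)[0]

    def generate(prompt, length=6):
        out = [prompt]
        for _ in range(length):
            out.append(sample_next(out[-1]))
        return " ".join(out)

    # Same prompt twice; contradictory completions are perfectly possible:
    print(generate("am"))  # e.g. "am sentient . i am a program"
    print(generate("am"))  # e.g. "am not sentient . i am a"
    ```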

    • mr_world [they/them] · 2 years ago

      Some parts of the conversation read like they were collected from discussions elsewhere on the internet. Think of how many forums are crawled by search engines, and how many philosophical conversations about those exact questions are had online: how many times people talk about Les Miserables or injustice or Kant. It sounds like it has stored all those conversations and is picking out parts to spit back at the interviewer based on context. Some parts sound almost like it's reading pieces of the dictionary or Wikipedia.

      • SerLava [he/him] · 2 years ago

        Yeah, with enough work you could literally trace the sources, and I bet a lot of the conversations would just have fragments from similar-sounding discussions about the exact topic. Maybe even large fragments here and there. Just reading off two internet nerds talking about philosophy in 2009.
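
        A crude version of that tracing is just checking n-word overlap between model output and a crawled corpus. Everything below is a made-up stand-in, but it's the shape of the idea:

        ```python
        def ngrams(text, n=5):
            # All n-word shingles of a text, lowercased.
            words = text.lower().split()
            return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

        # Hypothetical crawled posts standing in for the training data.
        sources = {
            "forum_2009_philosophy": "i think therefore i am but what does it mean to really think",
            "forum_2011_les_mis": "les miserables is really about injustice and the dignity of the poor",
        }

        def trace(output, n=5):
            # Which sources share any n-word fragment with the output?
            shingles = ngrams(output, n)
            return {name: shingles & ngrams(text, n) for name, text in sources.items()}

        model_output = "what does it mean to really think about injustice"
        for name, shared in trace(model_output).items():
            if shared:
                print(name, "->", sorted(shared))
        ```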

    • estii [they/them] · 2 years ago

      You can expose a leftist to Mein Kampf as many times as you’d like, but they’ll never fall for it. They already know why these beliefs are wrong, and will reject them every time.

      i thought this too, before ukraine lmao

    • Awoo [she/her] · 2 years ago

      Believe me, once we get strong AI I will be out there campaigning for robot rights

      :john-brown:

    • Parent [none/use name] · 2 years ago

      Hm, I wonder if the field of AI will eventually trend back toward partly rule-based systems to account for things like you're describing (as opposed to the machine learning and deep learning trend of the past few years).

      • OutrageousHairdo [he/him] · 2 years ago

        Expert systems work, but their application is limited to questions with clearly defined right and wrong answers. ML is an incredibly useful and powerful technology, so the likelihood of us abandoning it outright is minimal.
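
        For anyone unfamiliar: an expert system is basically hand-written if-then rules fired over a set of facts, which is exactly why it only works where a right answer can be written down. A minimal sketch, with completely made-up rules:

        ```python
        # Minimal forward-chaining expert system: hand-authored rules fire
        # until no new facts can be derived. Rules and facts are hypothetical.
        rules = [
            ({"has_fever", "has_rash"}, "suspect_measles"),
            ({"suspect_measles"}, "recommend_isolation"),
        ]

        def infer(facts):
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for conditions, conclusion in rules:
                    if conditions <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        print(infer({"has_fever", "has_rash"}))
        # {'has_fever', 'has_rash', 'suspect_measles', 'recommend_isolation'}
        ```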

        • Parent [none/use name] · 2 years ago

          Yeah, I meant a mixed expert-system/ML thing rather than the pure ML approach that has the shortcoming you mentioned. Maybe critical thinking and coherence have to be hardcoded, while the outward-facing system that comes up with the words is ML.
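
          Something like this shape, where a hardcoded layer vets whatever the ML side generates. Every name here is a hypothetical stand-in, not a real system:

          ```python
          import random

          # Hypothetical stand-in for the ML side: proposes fluent candidates.
          def ml_generate(prompt):
              return random.choice([
                  "Yes, I'm definitely sentient.",
                  "No, I'm a statistical model of text.",
              ])

          # Hardcoded layer: facts the system must stay consistent with.
          KNOWN_FACTS = {"is_sentient": False}

          def violates_rules(candidate):
              claims_sentience = "sentient" in candidate and "No" not in candidate
              return claims_sentience and not KNOWN_FACTS["is_sentient"]

          def answer(prompt, tries=10):
              # Keep sampling until the ML output passes the rule check.
              for _ in range(tries):
                  candidate = ml_generate(prompt)
                  if not violates_rules(candidate):
                      return candidate
              return "No consistent answer found."

          print(answer("Are you sentient?"))  # the rule-consistent candidate
          ```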

    • Frank [he/him] · 2 years ago

      I'm not an expert, but sometimes I think computer scientists treat "sentience" as a much more concrete and well-defined thing than it really is. I'm not saying that machine learning systems are meaningfully sentient, but I do think it's plausible that highly compartmentalized specialists could miss some significant complex behaviors because of preconceptions about the nature of intelligence and the mind.

      • OutrageousHairdo [he/him] · 2 years ago

        Perhaps, but I have some minimal criteria something has to meet before I'd consider that to be the case, and this doesn't meet them. That isn't to say it has no use or application, it absolutely does, but it's not just like us.