I must confess I have a personal vendetta against Yudkowsky and his cult. I studied computer science in college. As an undergrad, I worked as an AI research assistant. I develop software for a living. This is my garden the LessWrong crowd is trampling.

  • GarbageShoot [he/him]
    hexbear
    14
    8 months ago

    The usual refutation to this is that the LLM is not "telling" you anything; it is producing a string of characters that, according to its training on its data set, looks like a plausible response to the given prompt. This is like using standard conditioning methods to teach a gorilla to make gestures corresponding to "please kill me now" in sign language and then using that to justify killing it. Neither is "communicating" with the symbols in the sense of the semantic meanings humans have assigned them, because the contexts in which they have observed and used those symbols are utterly divorced from those arbitrary meanings.

    • UlyssesT [he/him]
      hexbear
      5
      8 months ago

      Chinese Room thought experiment don't real. All that exists is Harry Potter fanfiction with sex slavery characteristics. Try being less wrong. smuglord

    • Mardoniush [she/her]
      hexbear
      2
      8 months ago

      More formally, there was sentience involved, but it was upstream, when the data set was produced and curated in the first place, which is why LLMs have that warmed-over photocopy feel.
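
GarbageShoot's point is easy to make concrete. Here is a minimal sketch in Python: a toy character-level Markov model that emits "plausible" continuations purely from co-occurrence statistics in its training text. Real LLMs use neural networks over tokens rather than lookup tables, and the corpus and function names here are just illustrative, but the objective is the same: predict a likely next symbol, with no semantic grounding anywhere in the loop.

```python
import random
from collections import defaultdict

def train(text, order=3):
    # Map each `order`-character context to the characters observed after it.
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80, order=3):
    # Extend `seed` by repeatedly sampling a next character that followed
    # the current context in training. "Plausible" here means nothing more
    # than "statistically frequent after this context".
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # unseen context: the model has nothing to say
            break
        out += random.choice(choices)
    return out

corpus = "the cat sat on the mat. the dog sat on the log. " * 4
print(generate(train(corpus), "the "))
```

Nothing in this loop knows what a cat is. Scale it up by many orders of magnitude and swap the frequency table for a transformer, and that is still the sense in which the output merely "looks like" a response.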