Good episode, and an apparently interesting book discussed meow-floppy

Incidental thoughts:

When you search for a word, you don't have a map of the statistical likelihood of words appearing; you start from the concept/shape of what you want to say.

Nice thought about "you wouldn't let a dolphin be a judge, despite them being intelligent"

Why don't AI believers relate their whole life to ChatGPT and stop fearing death? You would have allegedly immortalized yourself then.

Also an interesting thought about Gödel incompleteness applying between matrix operations and natural neural networks

  • MerryChristmas [any]
    ·
    7 months ago

    I agree with you on the empathy issue, but here's where I hesitate to say it should be rejected outright:

I've had some interesting conversations with myself using GPT-4 as a sort of funhouse mirror, and even though I recognize that it's just a distorted reflection... I'd still feel guilty if I were to behave abusively towards it? And I think maybe that's healthy. We shouldn't roleplay engaging in abuse without real-world consequences, if for no other reason than because it makes us more likely to engage in abuse when there are actual stakes.

    In this scenario, the ultimate object of my empathy is my own cognitive projection, but the LLM is still the facilitator through which the empathy happens. While there is a very real danger of getting too caught up in that empathy, isn't there also a danger in rejecting that empathetic impulse or letting it go unexamined?

    • plinky [he/him]
      hexagon
      ·
      edit-2
      7 months ago

The problem as I see it (and I'm not a psychologist or whatever) is that you don't have feelings towards your mirror, for example; your brain adapted to your reflection not being a real thing at like age 2-3.

The brain doesn't have natural defenses against empathizing with an LLM (even with ELIZA, people were ready to tell the program their secrets). And feelings aren't logical (as in, you can know it's bullshit and still feel some fulfillment from such conversations). They will probably discuss (in the podcast) what the author thought of that phenomenon with ELIZA, but on a large scale I can see it being a problem in an atomized society: a noticeable number of people will drop out into LLM fantasies.

I don't think there is a danger in rejecting that empathy. I like some plush toys from my childhood; I would be hurt if something happened to them, and I wouldn't hurt them, but I also don't empathize with them.