I must confess I have a personal vendetta against Yudkowsky and his cult. I studied computer science in college. As an undergrad, I worked as an AI research assistant. I develop software for a living. This is my garden the LessWrong crowd is trampling.

  • muddi [he/him] · 11 months ago

    Jain philosophy has a nice approach to this. Pleasure and pain are symptoms of having a sense, e.g. touch, sight, taste, etc., and they count the mental faculty as a sense too. There are living beings with only one sense, e.g. trees growing towards light and away from obstacles. There are beings with many senses, including the mental faculty, e.g. humans. The more senses you have, or more generally the more complex your senses, the more complex your experience of pleasure and pain, e.g. dogs whining at high-pitched sounds humans can't hear.

    https://en.wikipedia.org/wiki/Jain_terms_and_concepts

    Anyways, the point is that if something has a sense, then maybe don't hurt it. If you really need to, then minimize the harm, either the pain you cause or the number of senses of the being you choose to hurt. A chicken has as many, or almost as many, senses as a human. Don't hurt it. But plants probably don't suffer as much. That's why Jains are vegetarian (they have to eat something, so best to minimize the suffering involved), and even then they only eat parts of plants that don't extinguish a life, e.g. fruits and leaves, which are meant to be picked or can at least be regrown.

    An AI arguably has no senses, or only a version of the mental faculty in which it hallucinates mental anguish (pain not tied to any physical sensation). Turning off an AI is not equivalent to slaughtering an animal. It's more like killing a plant.