Permanently Deleted

  • Rev [none/use name] · edit-2 · 4 years ago

    But isn't intuition just analytical problem-solving with shortcuts based on empirical probabilities, which, even though not universal, are statistically good enough for the task in question within the particular environment in which it is performed?

    And an argument could be made that you could have moral principles (good vs. evil) based entirely on the evolution of the observable natural world:

    simplicity → complexity axis

    vs.

    muteness → (self-)awareness/consciousness axis.

    At the very least, this wouldn't be a morality born entirely out of someone's imagination, as all previous ones have been.

    • Zuzak [fae/faer, she/her] · 4 years ago

      I guess, but in any case I was quite impressed with the capabilities of Go AI, and since I didn't expect it, it behooves me to reevaluate my view of AI and to be cautious about my assumptions.

      I don't buy any arguments about developing morality from observing the evolution of the natural world. If I am an inhuman intelligence, why should I have a preference about following the order of the natural world? I might just as well say that there is no need for me to act in accordance with that because nature will make it happen anyway. And if I did form a moral system based around complexity, I may well just nuke everything in order to increase entropy.

      As for "muteness vs self-awareness," we know for a fact that humans are ingrained with a self-preservation instinct because of evolution, which makes me skeptical of the idea. It's like, "well of course you'd say that, you're a human." Again, it's just a matter of the is-ought problem. If I asked why self-awareness is preferable to muteness, I imagine your answer would involve talking about the various things self-awareness allows you to do and experience - but then that raises the question of why those those things are good. When looking from the perspective of a totally alien intelligence, we cannot take anything for granted.

      Now, if AI were to develop values with a degree of randomness, and if multiple ones were to exist, we could see a form of rudimentary evolution or survival of the fittest, where AI that did not value survival did not last, and the remaining ones would be those that randomly assigned value to existence as opposed to non-existence. However, because the circumstances they would be adapting to would be vastly different from what humans adapted for, it's quite likely that they would develop some very alien values, which would be fairly impossible to predict.
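
      That last paragraph describes a selection dynamic that can be sketched in a few lines of code. The following is a minimal, hypothetical Python simulation, not anything from the comment itself; every name and number in it is an illustrative assumption. Agents are assigned random bundles of "values," only one of which (whether they value their own survival) affects whether they persist, and after repeated rounds the surviving population is dominated by agents that happened to value existence, while their other values remain arbitrary.

      ```python
      # Minimal sketch of "rudimentary evolution" among AIs with randomly
      # assigned values. All parameters are illustrative assumptions.
      import random

      GENERATIONS = 50
      POPULATION = 100

      def random_agent():
          # Each agent gets an arbitrary bundle of values; only
          # values_survival has any bearing on whether it persists.
          return {
              "values_survival": random.random() < 0.5,
              "other_values": [random.random() for _ in range(3)],  # arbitrary, unexamined
          }

      agents = [random_agent() for _ in range(POPULATION)]

      for _ in range(GENERATIONS):
          survivors = []
          for agent in agents:
              # Agents that value survival avoid most hazards; the rest face them blindly.
              death_chance = 0.05 if agent["values_survival"] else 0.40
              if random.random() > death_chance:
                  survivors.append(agent)
          # Survivors replicate back up to the population cap; offspring copy
          # the parent's values (no mutation, to keep the sketch short).
          agents = [random.choice(survivors).copy() for _ in range(POPULATION)] if survivors else []

      share = sum(a["values_survival"] for a in agents) / max(len(agents), 1)
      print(f"Fraction valuing survival after {GENERATIONS} generations: {share:.2f}")
      ```

      Under these assumptions the printed fraction approaches 1.0, while the "other_values" carried along by the survivors stay random, which is the point about alien values being selected only for compatibility with continued existence, not for anything resembling human preferences.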