Permanently Deleted

    • Zuzak [fae/faer, she/her]
      ·
      4 years ago

      I just feel more sentimental about it if they're actual physical robots exchanging actual physical goods.

      I used to think that AI was inevitable, that any moment it would surpass human intelligence and start improving exponentially, and I thought this was a good, or at least exciting, thing. Then I became a little more jaded and critical of the idea, partly because I think a lot of people who see it that way operate with an oversimplified and unexamined view of intelligence, one that's just about analytical problem solving and doesn't take into account things like intuition. But then computers beat top pros at Go, which I did not expect, because in my experience Go involves more than just logical analysis, and being in the Go world and following the AI stuff made me less sure about my skeptical stance. So as for the question of how realistic it is, practically speaking, I just don't know.

      Philosophically speaking, I think a lot of the "it's not real intelligence" comes from a place of feeling uncomfortable with the idea and trying to grasp at straws for a random distinguishing characteristic and then pretending like that's the defining characteristic that AI can never emulate... and then if it does you can just find something else to latch onto.

      That said, I think it's important to consider purposes, values, and motivations. Being smart doesn't give you more of a purpose; if anything, I'd say it's the opposite. Dogs never have existential crises. Human ideals, as much as we may try to examine them and form consistent principles, are still fundamentally grounded in biological, evolutionary values, like: living is better than dying. Even if a machine were capable of self-reflection, I think any morals or desires it might develop would be grounded in the values instilled by its creator. Because of the is-ought problem, I think it is impossible to derive any sort of moral truth out of pure logic and evidence without first making some assumption about values.

      Given the amount of money that would have to go into developing a self-aware AI, I can only assume that whoever developed it would be rich and powerful, which does not instill a lot of confidence that such an AI would be not-evil. Maybe a programmer can stealthily replace "maximize profits" with "don't be a dick," and it'll be fine, who knows.

      As for humanity being replaced by robots, I guess I'm cool with it, because we're pretty clearly on track to destroy everything and I'd rather have a universe where intelligences exist than one where they don't. Would be cool if they just helped us achieve FALGSC.

      • Rev [none/use name]
        ·
        edit-2
        4 years ago

        But isn't intuition just analytical problem solving with shortcuts that are based on empirical probabilities, which, even though not universal, are statistically good enough for the task in question within the particular environment it's performed in?

        And an argument could be made that you could have moral principles (good vs. evil) based entirely on the evolution of the observable natural world:

        simplicity-->complexity axis

        vs.

        muteness-->(self)awareness/consciousness axis.

        At the very least this wouldn't be a morality born entirely out of someone's imagination, as all previous ones have been.

        • Zuzak [fae/faer, she/her]
          ·
          4 years ago

          I guess, but in any case I was quite impressed with the capabilities of Go AI, and since I didn't expect it, it behooves me to reevaluate my view of AI and to be cautious about my assumptions.

          I don't buy any arguments about deriving morality from observing the evolution of the natural world. If I am an inhuman intelligence, why should I have a preference for following the order of the natural world? I might just as well say that there is no need for me to act in accordance with it, because nature will make it happen anyway. And if I did form a moral system based around complexity, I may well just nuke everything in order to increase entropy.

          As for "muteness vs self-awareness," we know for a fact that humans are ingrained with a self-preservation instinct because of evolution, which makes me skeptical of the idea. It's like, "well of course you'd say that, you're a human." Again, it's just a matter of the is-ought problem. If I asked why self-awareness is preferable to muteness, I imagine your answer would involve talking about the various things self-awareness allows you to do and experience - but then that raises the question of why those those things are good. When looking from the perspective of a totally alien intelligence, we cannot take anything for granted.

          Now, if AI were to develop values with a degree of randomness, and if multiple ones were to exist, we could see a form of rudimentary evolution or survival of the fittest, where AI that did not value survival did not last, and the ones remaining would be those that happened, at random, to assign value to existence over non-existence. However, because the circumstances they would be adapting to would be vastly different from what humans adapted for, it's quite likely that they would develop some very alien values, which seem fairly impossible to predict.
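
          A toy sketch of the kind of selection I mean (purely illustrative; the randomly assigned "values" and the survival rule are assumptions made up for the example, not a claim about how any real AI would work):

          ```python
          import random

          # Each "agent" gets a bundle of randomly assigned values, including
          # whether it happens to value its own continued existence.
          def make_agent():
              return {
                  "values_survival": random.random() < 0.5,  # assigned at random
                  "other_values": random.random(),           # stand-in for arbitrary alien values
              }

          population = [make_agent() for _ in range(1000)]

          for generation in range(10):
              # Agents that don't value survival take no steps to persist, so they
              # drop out; the rest carry on with whatever values they started with.
              population = [a for a in population if a["values_survival"]]

          print(len(population), all(a["values_survival"] for a in population))
          # Every agent left happens to value existence, purely because the others
          # selected themselves out, not because that value is "correct".
          ```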

    • crispyhexagon [none/use name]
      ·
      4 years ago

      ghost in the machine but it's literally machines spontaneously generating their own souls as they gain sentience.

      just in time for spookmas part 2: the regiftening!

      🎁 🎄 :specter: