• gerikson@awful.systems
    cake
    ·
    11 months ago

    I didn't read this, but I'm confident it can be summarized as "how many hostile AGIs can we confine to the head of a pin?"

  • Sailor Sega Saturn@awful.systems
    ·
    11 months ago

    I remember role-playing cops and robbers as a kid. I could point my finger and shout "bang bang, I got you," but if my friend didn't pretend to be mortally wounded and instead just kept running around, there was really nothing I could do.

  • Evinceo@awful.systems
    ·
    11 months ago

    Nobody tell these guys that the control problem is just the halting problem and first year CS students already know the answer.
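
    The reduction the comment is riffing on is the classic diagonalization argument: assume a perfect predictor exists, then construct a program that does the opposite of whatever the predictor says about it. A minimal sketch (the function names `halts`, `make_contrarian`, and `contrarian` are illustrative, not from any real library):

```python
# Sketch of the halting-problem diagonalization.
# Assume someone hands us a claimed perfect oracle halts(program, input)
# that returns True iff `program` halts on `input`. We then build a
# program the oracle must mispredict, so no such oracle can exist.

def make_contrarian(halts):
    """Given a claimed halting oracle, return a program that defeats it."""
    def contrarian(program):
        # Ask the oracle what `program` does when fed its own source,
        # then do the opposite.
        if halts(program, program):
            while True:   # oracle said "halts" -> loop forever
                pass
        else:
            return        # oracle said "loops" -> halt immediately
    return contrarian

# Whatever halts(contrarian, contrarian) answers, contrarian(contrarian)
# does the opposite, so the oracle was wrong: contradiction.
```

    The same argument applies to any "predict exactly what this agent will do" scheme, which is the point being made about the control problem.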

    • David Gerard@awful.systemsM
      ·
      11 months ago

      remembering how Thiel paid Buterin to drop out of his comp sci course, so he spent all of 2018 trying to implement plans for Ethereum that only required that P=NP

    • kuna@awful.systems
      ·
      11 months ago

      On a similar note, Yud's decision theory hinges on an AI (presumably a Turing machine) predicting what a human (Turing-complete at the least) does with 100% accuracy.

      • self@awful.systemsM
        ·
        11 months ago

        …huh. somehow among all the many things wrong with TDT, I never cottoned to the fact that it just reduces to the halting problem

        are rats just convinced that Alan Turing never considered what if computer but more complex? cause there’s a whole branch of math dedicated to computability regardless of the complexity of the computation substrate, and Alan helped invent it. of course they don’t know about this because they ignore the parts of computer science that disagree with their stupid ideas

        • kuna@awful.systems
          ·
          11 months ago

          Actually I might have done goofed with that one; now that I think of it, if you assume some jackoff amount of computing power, then a human brain (assuming nothing uncomputable happens there, so sad Penrose noises) could be simulated from first principles for a limited amount of time, no actual proof of possible future outcomes needed. This still leaves the problem of how exactly you get all the data for that (and I think any uncertainty would require an exponential increase in paths you have to simulate), especially without killing the human in question.