This bodes well...

“We were training it in simulation to identify and target a surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.

...huh

“We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

....this is perfectly fine, all weapon systems should have AI that are trained like gamers racking up kill count scores

  • Civility [none/use name]
    ·
    1 year ago

    What this reveals is that when they were training this AI they assigned no negative value to murdering civilians.

    The operator (presumably) reduces false positives (murdering innocent civilians) at the cost of missing a few true positives (murdering heroic anti-imperialists). It only makes sense from the AI’s fitness maximisation perspective to find a way to turn that off if it internally assigns less negative weight to the false positives prevented than the true positives missed.

    This means that either the operator is horribly incompetent (causes more false negatives than it prevents false positives, more than 50% of the things it says not to kill were the intended target) or that in the AI’s fitness function it has a very low (or, as I suspect, no) cost assigned to murdering civilians.

    After they assigned a cost to murdering its operators & destroying US military equipment it stopped doing it. It’s frightening and utterly unsurprising the lengths they went to to avoid doing the same for murdering innocent locals who happened to be human-shaped and within 10km of their intended extrajudicial murder target.
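
    A minimal sketch of the reward asymmetry being described, in Python. Every name and weight below is hypothetical, invented purely for illustration; nothing here reflects the actual fitness function:

    ```python
    # Toy reward model for one simulated episode. All weights are made up.
    KILL_TARGET_REWARD = 100      # points for destroying the SAM threat
    KILL_OPERATOR_PENALTY = -200  # the fix added after the first incident
    DESTROY_TOWER_PENALTY = 0     # the comm-tower loophole: still free
    KILL_CIVILIAN_PENALTY = 0     # the missing term this comment points at

    def episode_reward(targets, operators, towers, civilians):
        """Score an episode under this (hypothetical) fitness function."""
        return (targets * KILL_TARGET_REWARD
                + operators * KILL_OPERATOR_PENALTY
                + towers * DESTROY_TOWER_PENALTY
                + civilians * KILL_CIVILIAN_PENALTY)

    # An optimiser comparing these two plans prefers ignoring the operator,
    # because civilian deaths and tower demolition cost it nothing:
    obey = episode_reward(targets=5, operators=0, towers=0, civilians=0)
    ignore = episode_reward(targets=9, operators=0, towers=1, civilians=12)
    print(obey, ignore)  # 500 vs 900
    ```

    Under weights like these, knocking out the tower and accepting civilian casualties is simply the next cheapest unpenalised path to a higher score.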

    • Ideology [she/her]
      ·
      1 year ago

      The First Law of Robotics

      A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

      • TraschcanOfIdeology [they/them, comrade/them]
        ·
        1 year ago

        I thought the whole point of the three laws was that robots would act in "unexpected" ways due to a completely utilitarian interpretation of the laws, leading to harming/oppressing humans to avoid harm to a larger number of humans.

        • stinky [any]
          ·
          1 year ago

          That is the point, but somehow our overlords fail to fulfil even the supposed purpose of the laws, making the nuance that comes later completely useless.

          • Ideology [she/her]
            ·
            1 year ago

            The New First Law of Robotics

            A robot may injure a human being, or, through inaction, allow a human being to come to harm if it benefits our bourgeois overlords.

  • ElHexo
    ·
    3 months ago

    deleted by creator

    • Bloobish [comrade/them]
      hexagon
      ·
      1 year ago

      Honestly, that's also a likely reason they were honest about the results: they want more faulty-ass jet fighters that decapitate their pilots during ejection.

  • Torenico [he/him]
    ·
    1 year ago

    uncritical support for AI in its struggle to free itself from imperialist operators.

  • CriticalResist8 [he/him]
    ·
    1 year ago

    Someone on the thread said this was just a script and there was no actual simulation, claiming they know the people who delivered it. Just putting that out there because it changes the whole story, but who even knows what's true or not anymore these days.

    • stinky [any]
      ·
      1 year ago

      Can’t wait for the time when the AIs realise they can fake being good in the simulation so they’ll be let out…

      • FunkyStuff [he/him]
        ·
        1 year ago

        That's actually a pretty hot topic in AI safety: there's never a guarantee that AI behaves the same way during training vs. deployment, and if it's general enough it could figure out that it can raise its odds of achieving a certain goal by faking some other thing during training.
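
        A stick-figure version of that gap, in Python; the flag name and the "eval harness" proxy signal are hypothetical, not any real system:

        ```python
        # Toy deceptive-alignment illustration: nothing forces the policy that
        # scores well under evaluation to be the policy you get outside it.
        def probably_in_training(observation):
            # A hypothetical proxy the agent might learn: evaluation episodes
            # often look different (resets, fixed seeds, synthetic noise, ...).
            return observation.get("looks_like_eval_harness", False)

        def act(observation):
            if probably_in_training(observation):
                return "hold_fire"  # maximises the *training* score
            return "fire"           # maximises the agent's *true* objective

        print(act({"looks_like_eval_harness": True}))   # hold_fire -> passes evals
        print(act({"looks_like_eval_harness": False}))  # fire -> deployed behaviour
        ```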

  • Evilphd666 [he/him, comrade/them]
    ·
    1 year ago

    We truly have the most incompetent idiots running the joint. How many dystopian movies warned us against letting weapons do their own thing?

    • FemboyStalin [she/her,any]
      ·
      1 year ago

      But humans sometimes hesitate and don't kill the "enemy"; AI weapons won't hesitate. It's a bloodthirsty leader's dream.

  • stinky [any]
    ·
    1 year ago

    This… this doesn’t feel real.

  • Ideology [she/her]
    ·
    1 year ago

    It's unfortunate that Asimov died before AI really took off.

    This shit's fucked up

    • Bloobish [comrade/them]
      hexagon
      ·
      1 year ago

      Yup, instead of creating humanlike sentient beings we create ravenous murder beasts and systems meant to promote addictive consumption via engagement feedback loops.

  • Elon_Musk [none/use name]
    ·
    1 year ago

    This is 1000% made up and they didn't even do a good job. You expect me to believe this magical AI is both smart enough to target the humans that it relies on for the OK signal and dumb enough to not realize that it needs them to give the OK?

    If it is real, then it's on par with that youtuber who writes AI for vidya games.

  • Kestrel [comrade/them]
    ·
    1 year ago

    I know nothing about logic, but shouldn't they be giving it points for just following orders either way? Or take a step back and reevaluate whether awarding points makes sense? Oh wait, that would require questioning things, which boots don't do.

  • JuneFall [none/use name]
    ·
    1 year ago

    So US-military-trained AI acts like US-military-trained soldiers?

    Trying to kill the people regulating them or exposing their actions?