• The US is among countries arguing against new laws to regulate AI-controlled killer drones.
  • The US, China, and others are developing so-called "killer robots."
  • Critics are concerned about the development of machines that can decide to take human lives.

In a speech in August, US Deputy Secretary of Defense Kathleen Hicks said technology like AI-controlled drone swarms would enable the US to offset the numerical advantage of China's People's Liberation Army (PLA) in weapons and people.

"We'll counter the PLA's mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat," she said, reported Reuters.

Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.

"Individual decisions versus not doing individual decisions is the difference between winning and losing — and you're not going to lose," he said.

"I don't think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves."

    • JohnBrownNote [comrade/them, des/pair]
      ·
      edit-2
      1 year ago

      as much as the tenants of the WTC were ghouls, getting the whole pentagon instead would've been dope and would've spared us all rudy giuliani's later career.

  • FourteenEyes [he/him]
    ·
    1 year ago

    It's not like the drones don't kill mostly civilians as it is, why not just make the war crimes completely automated?

    • hexaflexagonbear [he/him]
      ·
      1 year ago

      Publishing statements like "the robot follows the rules of engagement" when you know the rules of engagement are "make meatbag die"

  • emizeko [they/them]
    ·
    edit-2
    1 year ago

    of course it is, capital wants a way to slaughter millions of climate refugees at the border without too much trauma for its death squads

    • BurgerPunk [he/him, comrade/them]
      ·
      1 year ago

      "Build the Wall" was the soft intro to eco fascism. That's why the wall is being built by the "adults in the room party." Thats why the kids are still in cages. They know all about climate change and their answer is to slaughter everyone who tries to cross the border.

  • Tachanka [comrade/them]
    ·
    edit-2
    1 year ago

    now they can turn murders they always intended to commit into "oopsy it was a software bug" manslaughter cases that they blame on random (admittedly also guilty) tech chumps instead of top brass. kinda like how the Abu Ghraib torturers sometimes got convicted but the people overseeing it all got ignored.

  • THC
    ·
    1 year ago

    "I don't think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves."

    This article is terribly written so I'm not sure how to interpret this statement, which is not given much context. Are they admitting that China wouldn't use autonomous murder bots? So they're basically saying "those Chinese won't let computers kill with reckless abandon but we sure as shit will!"

    • Nakoichi [they/them]
      ·
      1 year ago

      It's saying that our adversaries would not put that limitation on themselves; it's a roundabout way of implying that they would indeed use them.

      • THC
        ·
        edit-2
        1 year ago

        Okay yeah you're right. I was getting tripped up by the quote before it saying this would allow them to counter the PLA's numbers advantage because to me it implies that they think the PLA wouldn't use autonomous killing.

  • FunkyStuff [he/him]
    ·
    1 year ago

    They've been moving in this direction for a while. Haven't they been developing autonomous target selection systems and the like for a long time?

  • Marxism-Fennekinism@lemmy.ml
    ·
    1 year ago

    Remember: There is no such thing as an “evil” AI, there is such a thing as evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.

  • Teekeeus
    ·
    edit-2
    2 months ago

    deleted by creator

  • Evilsandwichman [none/use name]
    ·
    1 year ago

    Ah yes, because a government that massacres giant numbers of civilians and protects our war criminals should absolutely delegate the decision to kill to AI.

    It's not like we claim that our already high casualties are normal in wartime.

    It's not like a lot of people aren't already viciously Sinophobic and would be completely fine with war crimes against people from China.

    Yeah I'm sure this won't lead to absolute horrors.

  • RyanGosling [none/use name]
    ·
    1 year ago

    Mungus brains will not piss themselves over this. They are afraid their sex robots will kill them in their sleep and take over the world, not drone strikes killing countless families because they were being brown too hard

  • WayeeCool [comrade/them]
    ·
    edit-2
    1 year ago

    Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.

    "Individual decisions versus not doing individual decisions is the difference between winning and losing — and you're not going to lose," he said.

    The fkd up part is I understand exactly where Pentagon planners and the engineers at various contractors are coming from. Remember when the world discovered the US had unmanned mini stealth bombers thanks to the Iranian military jamming the uplink of one and forcing it down?

    Iran RQ-170 Incident

    To be clear, the White House and Pentagon are lying in these public statements about the reason. It isn't because China outnumbers the US, which is why their logic of "we can out-manufacture China on Terminators" makes zero sense. Before I go on, let me state that I wholeheartedly believe this is going to make the world a worse place, and I wish it weren't happening.

    The US is lying about the why; it has to lie, because it can't publicly announce that it recently identified a massive flaw in current US doctrine surrounding unmanned combat platforms. The motivation comes from what the Pentagon has witnessed in Ukraine, specifically the interactions between unmanned platforms and the kind of electronic warfare a US peer rival can bring to the table. Currently, when an unmanned platform gets jammed and put into an unscheduled communications blackout, it fail-safes like a puppet with its strings cut: it holds position, then returns to base when fuel gets low. What the US has realized is that if it ever ends up in a shooting war with a peer rival, unmanned combat platforms will need to fail-deadly, like a creature of war that has had its leash unexpectedly cut.

    This change in failure mode is going to be needed for the Loyal Wingman unmanned fighters currently being built to team with US manned fighters like hunting dogs. In a conflict with Russia or China, there will be heavy electronic warfare, and unmanned combat platforms will need to enter a fail-deadly mode whenever they lose uplink while actively engaged in combat.
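The two link-loss doctrines described above amount to a simple policy switch on communications blackout. A minimal sketch, with all names, thresholds, and behaviors hypothetical (nothing here reflects any real system):

```python
from enum import Enum

class LinkLossPolicy(Enum):
    FAIL_SAFE = "fail_safe"      # current doctrine: go passive when jammed
    FAIL_DEADLY = "fail_deadly"  # proposed doctrine: continue autonomously

def on_uplink_lost(policy, fuel_fraction, engaged_in_combat):
    """Return the action a platform takes when its control link is jammed.

    fuel_fraction: remaining fuel as a fraction of capacity (0.0-1.0).
    engaged_in_combat: whether the platform was actively engaged at blackout.
    """
    if policy is LinkLossPolicy.FAIL_DEADLY and engaged_in_combat:
        # Fail-deadly: the "leash cut" case -- keep fighting autonomously.
        return "continue_mission_autonomously"
    # Fail-safe (and fail-deadly when not engaged): hold position like a
    # puppet with cut strings, then return to base once fuel gets low.
    return "return_to_base" if fuel_fraction < 0.3 else "hold_position"
```

The point of the doctrinal shift is only the first branch: under jamming, an engaged platform no longer defaults to the passive behavior.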

    • NPa [he/him]
      ·
      1 year ago

      so a drone takes off from some base somewhere, loses wifi for a second, then starts dropping ordnance on whoever is closest? sounds good, no issue here, might as well equip a few with nukes, it'll be fine 👍 👍 👍

      • WayeeCool [comrade/them]
        ·
        edit-2
        1 year ago

        sounds good, no issue here, might as well equip a few with nukes, it'll be fine

        I know you are joking... but this is what Northrop Grumman originally proposed for the B-21 Raider: an unmanned, nuclear-armed, long-range stealth bomber. Luckily someone in the US Air Force was able to restore sanity before the B-21 Raider program was finalized. Instead we are going to get so-called Loyal Wingmen like the MQ-28 Ghost Bat and XQ-58 Valkyrie, fully automated multirole strike fighters with the flight characteristics and armament of an F-16.

        so a drone takes off from some base somewhere, loses wifi for a second, then starts dropping ordnance on whoever is closest?

        Not far off tbh, which is why I don't like that this is where the US is headed. They won't be randomly firing and dropping munitions, but they will be doing so with the decision-making accuracy of "artificial intelligence". So, something like 96% accuracy at differentiating a civilian airliner from a hostile enemy jet, or an elementary school from a military base. When cut off from the remote human operators, the mission will go on.

        Militaries have a thing for building fail-deadly into their doctrines.
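To put the hypothetical "96% accuracy" figure above in perspective, a quick base-rate calculation shows why per-contact accuracy is misleading at scale. All numbers here are illustrative and not from any real system:

```python
# Base-rate arithmetic for a hypothetical 96%-accurate classifier.
# Every number below is made up for the sake of the example.
accuracy = 0.96              # P(correct label) for any single contact
contacts = 1000              # civilian aircraft encountered over a campaign

# Expected number of civilian contacts mislabeled as hostile.
expected_misclassifications = round(contacts * (1 - accuracy))
print(expected_misclassifications)  # → 40
```

Even a classifier that is right 24 times out of 25 produces dozens of potentially lethal false positives once it evaluates enough contacts.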