Megan McArdle, comrades!

I just can't even believe how stupid this one is. It's not that I'm an AI booster (indeed, I'm rather pessimistic), but real shit, Dune is your model for a future humanity?

Brava, Megan, brava.

https://archive.is/FwQ5r no paywall, thanks @awoo

:posad:

  • Leon_Grotsky [comrade/them] · edit-2 · 2 years ago

    "Banning" "AI" did not save humanity in Dune

    Waging a religious war against inhuman social systems, killing millions in the process, and re-asserting their humanity over those technologies of social control is what "saved" humanity, and shit still massively sucked afterwards anyway.

    "What if smart machines do so themselves, having decided they'd be better off without people around clogging up the planet?"

    goddamn I fucking loathe "AI" discourse

    Edit: From the article comments:

    The risk isn't so much that AI will take over like a power-mad dictator but that it will infiltrate our own decision-making processes in ancillary ways until we no longer have a clear idea just who is making the decisions and why. AI performs on the basis of data which is not disclosed, and if it were disclosed is far too voluminous for humans to encompass, and hence they are rendered unable to critique the AI decisions they receive.

    This is actually one of the better ways I've seen "The God of the Machine Logic" described, so kudos to Judah ben Hur.

    • CthulhusIntern [he/him] · 2 years ago

      We're not even close to artificial general intelligence (AI able to solve problems it wasn't specifically programmed to solve). And even that would still be a long way from AI capable of making the kinds of decisions she's describing.

      • Frank [he/him] · 2 years ago

        Don't worry, currently existing algorithms are already quite capable of making decisions based on logic that is, for all practical purposes, a black box. And since most people don't understand what an algorithm is and think computers are perfect, unbiased logic wizards, the humans who are supposed to be using these machines often assume that whatever garbage-in-garbage-out bullshit the algorithm produces is the objective, unvarnished truth. There's a :liberalism: book from like a decade+ ago called Weapons of Math Destruction about then-current programs, like NYC's Crimestat, that were already causing serious injustices because people were both abusing algorithmic data processing and decision-making and attributing to the algorithms a degree of intelligence and objectivity they did not merit.

        The same thing is going to happen with the various large-model neural nets: people who have never heard of the "Chinese Room" or the "Turing Test" in their lives are going to replace critical workers with these things, and it's going to sorta kinda function just long enough for some truly breathtaking disaster to unfold.