• HumanBehaviorByBjork [any, undecided]
    1 year ago

    Say you run a charity and want to create and distribute an AI bot that will teach mathematics to underprivileged schoolchildren. That’s great, but the bot will encounter some obstacles. In some jurisdictions, it may need to pay licensing and registration fees. It may need to purchase add-ons for recent innovations in teaching. If it operates abroad, it may wish to upgrade its ability to translate. For a variety of reasons, it might need money.

    This is just poorly conceived science fiction. An AI bot? You mean the programs we have now that respond to prompts convincingly but incorrectly? How are those going to autonomously teach children, perhaps one of the least automatable jobs we have? Why does the bot need to pay the fees? It's a program. It's owned by someone. They can pay the fees. Frankly, if you have a program that can teach, filling out paperwork seems like the easy part.

    All those transactions would be easy enough if AIs were allowed to have bank accounts. But that’s unlikely anytime soon. How many banks are ready to handle this? And imagine the public outcry if there were a bank failure and the government had to bail out some bot accounts. So bots are likely to remain “unbanked” — which will push them to use crypto as their core medium of exchange.

    IT IS NOT A PERSON. IT IS NOT A FACSIMILE OF A PERSON. IT IS A PROBABILISTIC SOUP OF THINGS THAT PEOPLE SOMETIMES SAY. THERE IS NO REASON TO TREAT THE COMPUTER PROGRAM LIKE A PERSON.

    • HumanBehaviorByBjork [any, undecided]
      1 year ago

      Also we already have computer programs that autonomously move money around! That's what most of the finance industry is built on! This moron has completely lost the plot, he has no idea what's going on! Holy shit how can you be a literal economics professor at GMU and think that banks aren't ready to let computer programs play with money?

      Furthermore, possibly for liability reasons (do you want to be indicted in some foreign country because of something your bot said or did?), many of these bots won’t be owned at all.

      Hey, that sounds bad! Liability isn't just a legal fiction! It inscribes a concept of moral responsibility for one's actions into the law! If you create something that does harm, there's a good chance that you should be held liable for it!

      Remember the DAO, the Decentralized Autonomous Organization? I’ve yet to see a human-run DAO succeed at significant scale, perhaps because humans need more authority or because the DAO is just hidden human authority in another guise (e.g., one person controls 51% of the votes). The bots already have read about DAOs and their failings, and they may give them another go. In the meantime the bots will train themselves to learn how to make their DAOs work, and bot “corporations” may end up as more democratic than their human counterparts.

      "So, this idea has never worked, but if we imagine a magical genius program, that is very very very smart, it could be so smart that it could make a bad idea good."

    • InevitableSwing [none/use name]
      1 year ago

      How are those going to autonomously teach children, perhaps one of the least automatable jobs we have?

      I'm only partially joking when I say that tech bros think tech is akin to magic and the only thing stopping it from solving every problem is that people get in the way.

      • HumanBehaviorByBjork [any, undecided]
        1 year ago

        He's not even a tech guy! He's an econ guy! His reasoning is "well, the people saying AI is magic have lots of money, and they wouldn't have lots of money if they weren't right. The market has spoken."
