https://fortune.com/2023/11/03/ai-bot-insider-trading-deceived-users/

  • Sphere [he/him, they/them]
    ·
    11 months ago

This is so asinine. ChatGPT-4 does not reason. It does not decide. It does not provide instructions. What it does is write text based on a prompt. That's it. This headline is complete nonsense.

    • Tommasi [she/her]
      ·
      11 months ago

Maybe this is conspiracy-brained, but I am 99% sure that the way people like Hinton talk about this technology being so scary and dangerous is marketing to drive up the hype.

There's no way someone who worked on developing current AI doesn't understand that what he's talking about at the end of this article, AI capable of creating its own goals and basically independent thought, is so radically different from today's probability-based algorithms that it has absolutely zero relevance to something like ChatGPT.

      Not that there aren't ways current algorithm-based AI can cause problems, but those are much less marketable than it being the new, dangerous, sexy sci-fi tech.

      • CrushKillDestroySwag
        ·
        11 months ago

        This is the common consensus among AI critics. People who are heavily invested in so-called "AI" companies are also the ones who push this idea that it's super dangerous, because it accomplishes two goals: a) it markets their product, b) it attracts investment into "AI" to solve the problems that other "AI"s create.

    • drhead [he/him]
      ·
      11 months ago

      AI papers from most of the world: "We noticed a problem with this type of model, so we plugged in this formula here and now it has state-of-the-art performance. No, we don't really know why or how it works."

      AI papers from western authors: "If you feed unfiltered data to this model, and ask it to help you do something bad, it will do something bad 😱😱😱"