https://fortune.com/2023/11/03/ai-bot-insider-trading-deceived-users/

  • Dolores [love/loves]
    ·
    edit-2
    11 months ago

    ohhhhhhhhhhhhhh i get the push for this now

    not just offloading responsibility for 'downsizing' and unpopular legal actions onto 'AI' and algorithms, fuck it lets make them the ones responsible for the crimes too. what are they going to do, arrest a computer? porky-happy

    • usernamesaredifficul [he/him]
      ·
      11 months ago

      I maintain it would have been funnier to train monkeys to trade stocks. They could go around in little suits and wear a fez

      • Dolores [love/loves]
        ·
        11 months ago

        you can arrest monkeys though, so i see why they've done this

        • usernamesaredifficul [he/him]
          ·
          edit-2
          11 months ago

          monkey prison labour. also I don't think an animal can legally be responsible for anything

          and just replace the monkey trading stocks

          an economic miracle

          • Dolores [love/loves]
            ·
            edit-2
            11 months ago

            eco-porky hire this man!

            i firmly hold we should arrest animals because its funny, but it's usually illegal in modern jurisprudence

  • Sphere [he/him, they/them]
    ·
    11 months ago

    This is so asinine. ChaptGPT-4 does not reason. It does not decide. It does not provide instructtions. What it does is write text based on a prompt. That's it. This headline is complete nonsense.

    • Tommasi [she/her]
      ·
      11 months ago

      Maybe this is conspiracy-brained, but I am 99% sure that the way people like Hinto is talking about this technology being so scary and dangerous is marketing to drive up the hype.

      There's no way someone who worked with developing current AI doesn't understand that what he's talking about at the end of this article, AI capable of creating their own goals and basically independent thought, is so radically different from today's probability-based algorithms that it holds absolutely zero relevance to something like ChatGPT.

      Not that there aren't ways current algorithm-based AI can cause problems, but those are much less marketable than it being the new, dangerous, sexy sci-fi tech.

      • CrushKillDestroySwag
        ·
        11 months ago

        This is the common consensus among AI critics. People who are heavily invested in so-called "AI" companies are also the ones who push this idea that it's super dangerous, because it accomplishes two goals: a) it markets their product, b) it attracts investment into "AI" to solve the problems that other "AI"s create.

    • drhead [he/him]
      ·
      11 months ago

      AI papers from most of the world: "We noticed a problem with this type of model, so we plugged in this formula here and now it has state-of-the-art performance. No, we don't really know why or how it works."

      AI papers from western authors: "If you feed unfiltered data to this model, and ask it to help you do something bad, it will do something bad 😱😱😱"

    • zifnab25 [he/him, any]
      ·
      11 months ago

      So much of the job of investing is just figuring out who is lying. Inside trading gives you an edge precisely because the information is more accurate than what the public is provided.

  • Parsani [love/loves, comrade/them]
    ·
    edit-2
    11 months ago

    Calling this a "study" is being a bit too generous. But there is something interesting in it, it seems to use two layers of "reasoning" or interaction (is this how gpt works anyway? Seems like a silly thing to have a chat bot inside a chat bot). The one exposed to the user and the "internal reasoning" behind that. I have a solution, just expose the internal layer to the user. It will tell you its going to do an insider trading in the most simple terms. I'll take that UK government contract now, 50% off.

    This is all equivalent to placing two mirrors facing each other and looking into one saying "don't do insider trading wink wink" and being surprised at the outcome.

  • InevitableSwing [none/use name]
    ·
    edit-2
    11 months ago

    It sounds like it's ahead of schedule in its investment banker studies. Has it already gotten a real gig working in finance?

    • Tachanka [comrade/them]
      ·
      11 months ago

      if it's not just a load of bullshit, it still isn't impressive. "oh wow, we taught the AI John Nash's game theory and it decided to be all ruthless and shit"

      • GarbageShoot [he/him]
        ·
        11 months ago

        Theoretically, having the intelligence to be able to teach itself (in so many words) how to deceive someone to cover for a crime while also carrying out a crime would be pretty impressive imo. Like, actually learning John Nash's game theory and an awareness of different agents in the actual world, when you are starting from being a LLM, would be pretty significant, wouldn't it?

        But it's not, it's just spitting out plausibly-formatted words.

  • Zink@programming.dev
    ·
    11 months ago

    Humans decide the same shit for the same reasons every day.

    This isn’t an issue with AI. It is an issue of incentives and punishment (or lack thereof).

    • charlie
      ·
      edit-2
      11 months ago

      You've almost got it, you're right in that it's not an issue with AI, since as you've said, humans do the same shit every day.

      The root problem is Capitalism. Sounds reductive, but that's how you problem solve. You troubleshoot to find the root component issue, once you've fixed that you can perform your system retests and perform additional troubleshooting as needed. If this particular resistor burns out every time I replace it, perhaps my problem is further up the circuit in this power regulation area.

    • envis10n [he/him]
      ·
      11 months ago

      It is an issue with AI because it's not supposed to do that. It is also telling that it decided to do this, based on its training and purpose.

      AI is a wild landscape at the moment. There are ethical challenges and questions to ask/answer. Ignoring them because "muh AI" is ridiculous.

        • envis10n [he/him]
          ·
          11 months ago

          Oh I absolutely agree, I'm just saying that AI has some flaws that also need addressed

      • invalidusernamelol [he/him]
        ·
        11 months ago

        What they did was have a learning model sitting on top of another learning model trained on insider data. This is just couching it in a layer of abstraction like how Realpage and YieldStar fix rental prices by abstracting price fixing through a centralized database and softball "recommendations" about what you should rent out a home/unit for.

  • MaxOS [he/him]
    ·
    11 months ago

    "Oh no! My job is at risk of automation!" open-biden

  • aaro [they/them, she/her]
    ·
    11 months ago

    https://web.archive.org/web/20231107011805/https://fortune.com/2023/11/03/ai-bot-insider-trading-deceived-users/