• raven [he/him]
    ·
    10 months ago

    So these stories where AI blatantly lies about something go around pretty frequently, and if you google them 2 months later they still haven't done anything about it. They're bound to be forced to do something legally sooner or later, so why not just hire like 20 people now to, y'know, do the bare minimum before you're ordered to employ 1000? It seems pretty likely that's where this is headed.

    • WhatDoYouMeanPodcast [comrade/them]
      ·
      edit-2
      10 months ago

      One time I asked an AI for all the video games where health is illustrated by hearts. It told me The Legend of Zelda, which is true, but then it said Kingdom Hearts, which is definitely a green bar. If I can stumble across blatant lies just because my dumbass was talking to someone on a dating app, how the fuck could someone invest millions upon millions of dollars gambling on whether an LLM is going to give you accurate information without asking it something first?