• Parzivus [any]
    ·
    1 year ago

    ChatGPT is fun but also confidently wrong all the time. Using it as a knowledge source is one of the worst possible applications.

    • Owl [he/him]
      ·
      1 year ago

      The GPT-4 announcement included a benchmark on how often the different GPTs lie to you. ChatGPT lied 40% of the time.

      GPT-4 is down to 20% lies, which is definitely an improvement, but still a huge amount if you think about it at all.
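
      To put 20% in concrete terms (an illustrative back-of-the-envelope, not a number from the announcement): if each answer independently has a 20% chance of being wrong, a session of 10 questions has a 1 - 0.8^10 ≈ 89% chance of containing at least one wrong answer.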

      • UlyssesT [he/him]
        ·
        1 year ago

        Improved automated restaurant only does food poisoning 20% of the time. :so-true:

      • regul [any]
        ·
        1 year ago

        How often does Google lie?

      • MedicareForSome [none/use name]
        ·
        1 year ago

        However, they note that GPT-4 sounds more confident in its answers, so people are less likely to fact-check its results than with the previous version.

        The more accurate it is, the more likely its mistakes go unnoticed.

      • iridaniotter [she/her, they/them]
        ·
        edit-2
        1 year ago

        Yeah, I asked it for sources on ekranoplans and I'm pretty sure it made them up:

        • The Wing-In-Ground (WIG) Effect Craft: A Review

        • WIG Craft and Ekranoplans by Sergy Komarov

        edit: I asked it again and it gave me more fake sources, but one of them included a real researcher on the subject. The singularity is here!!!

    • happybadger [he/him]
      ·
      1 year ago

      I now have multiple professors who warn against using it for essays. Students are trying to write phytopathology papers using it.

    • HexbearsDad [he/him]
      ·
      1 year ago

      You have to explicitly ask it not to do that. AI researchers call it "hallucinating" a response. Tell it to say "I don't know" when it doesn't know something.
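
      A minimal sketch of what that looks like against the OpenAI chat API (the instruction wording, model name, and question are illustrative, not a guaranteed fix):

      ```python
      # Sketch: steer the model toward admitting uncertainty instead of
      # hallucinating. This reduces made-up answers but does not eliminate them.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      response = client.chat.completions.create(
          model="gpt-4",
          messages=[
              {
                  "role": "system",
                  "content": (
                      "Only answer from information you are confident about. "
                      "If you do not know something, reply \"I don't know\" "
                      "instead of guessing or inventing sources."
                  ),
              },
              {"role": "user", "content": "List three published papers on ekranoplans."},
          ],
      )
      print(response.choices[0].message.content)
      ```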

    • CanYouFeelItMrKrabs [any, he/him]
      ·
      1 year ago

      The Bing one has different modes: creative, balanced, and precise. ChatGPT seems to be on the creative side, but an application using GPT doesn't need to be like that. On precise mode, the Bing one regularly says it doesn't have enough info to answer.
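
      For an application that wants precise-style behavior from the raw API, the usual knob is sampling temperature. A minimal sketch, assuming (widely believed, but not documented by Microsoft) that Bing's modes map at least partly onto sampling settings:

      ```python
      # Sketch: temperature=0 makes sampling near-deterministic and conservative,
      # the rough analogue of a "precise" mode; values near 1 behave more like a
      # "creative" mode. The Bing-mode mapping itself is an assumption.
      from openai import OpenAI

      client = OpenAI()

      response = client.chat.completions.create(
          model="gpt-4",
          temperature=0,
          messages=[{"role": "user", "content": "Who built the Lun-class ekranoplan?"}],
      )
      print(response.choices[0].message.content)
      ```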