This is going to wreck society even more.

Please, for the love of Marx, do not take ChatGPT at its word on anything. It has no intelligence. No ability to sort the truth from fiction. It is just a very advanced chat bot that knows how to string words together in a sentence and paragraph.

DO NOT BELIEVE IT. IT DOES NOT CITE SOURCES.


Right now I feel like my HS English/Science teacher begging kids not to use Wikipedia.

But even Wikipedia is better than ChatGPT because. Wikipedia. Cites. Sources.

  • nat_turner_overdrive [he/him]
    ·
    2 years ago

    oh wild did the chatbot that only has outdated public information make a completely wrong guess and make up information that "seems" correct? crazy, it's almost like you should look at up-to-date documentation or open a ticket to get valid information

    somebody in a past chatbot thread said they were using it for dating advice, and...

    :yea:

    • Lovely_sombrero [he/him]
      ·
      2 years ago

It is not about being outdated; it will just use random stuff from the internet that has some of the required keywords in it. And it will sound very confident. And as more people publish its stupid answers to make fun of it (or because they believe the answer is correct), that wrong text will become part of the source material for the next iteration of ChatGPT.

      • nat_turner_overdrive [he/him]
        ·
        2 years ago

        That is a good point. The important bit of the shit these doofuses used is not public, so the bot just guessed based on other companies and it was wrong and bad.

        The only time using a chatbot makes any sense is if you are going to vet the output and you know enough to spot incorrect shit. Nine tenths of users will not vet or know anything about the subject it's outputting on.

    • abc [he/him, comrade/them]
      ·
      2 years ago

Yeah, they enabled ChatGPT for answering tickets on our end too - which I thought was fucking stupid because it isn't trained on anything relevant to our platform, so it just spits out fake-ass instructions that almost sound right, like "of course you can do X thing on our platform, here are the steps to do so:" when X is actually something explicitly forbidden.

      "Oh so we're implementing ChatGPT for saving time answering tickets? Cool! Is it trained on the 50,000 tickets/cases from Salesforce we've accumulated in the past 5 years since we switched CRMs?" No. "Oh, so what's the fucking point then? It won't get anything right and I certainly am never going to trust it enough to answer anything for me" Well, we hoped it would help the team with getting tickets solved.......

Like I can almost forgive the customers who ask ChatGPT how to do X or Y on our platform & then schedule a phone call with me where I have to explain that our company has nothing to do with ChatGPT, and that if they wanted to do X or Y, they should've looked in the support center, which has relevant articles/information about doing X and Y. But when we're training new people and actively have to tell them "yeah don't trust the ChatGPT thing that spits out a response for every new ticket, it has never been correct"?? lol

      • nat_turner_overdrive [he/him]
        ·
        2 years ago

I am increasingly convinced that the description of ChatGPT I have seen here before is correct - MBAs trained a chatbot to talk like an MBA, and they assume that means it's intelligent rather than concluding that they themselves are not.

    • GreenTeaRedFlag [any]
      ·
      2 years ago

Some on this site deemed it ableist to suggest that computer prompts should not be part of dating.