https://nitter.net/petergyang/status/1607443647859154946

  • RION [she/her]
    ·
    2 years ago

This post is a great example of a place where AI cannot match humans, and won't be able to for a very long time, maybe ever. Computers completely shit the bed on interpreting ambiguous text from context clues (more on that here). If the problem were submitted in proper formula format to something like Wolfram Alpha, it would solve it no problem, but asking it conversationally gives results like this.

    • UlyssesT [he/him]
      ·
      edit-2
      2 years ago

Even if/when computers start successfully and reliably understanding ambiguous text from context clues, I contend that what I said would still stand. There will be reductionists who want so very badly to declare the chatbot or other treat dispenser "true" intelligence in a way that belittles human intelligence at the same time. Instead of accepting the additional hurdles such a machine would need to clear to get there (plausible, with sufficient time, I believe), it's easier for them to denigrate human intelligence and try to rhetorically pull it down to the chatbot's level, now, for whatever reason. :lea-why:

      • RION [she/her]
        ·
        2 years ago

For sure. The Turing test and its consequences :thonk-cri:

        • UlyssesT [he/him]
          ·
          edit-2
          2 years ago

Finding or making people who are more credulous and gullible is, technically, a way to make a machine pass that test more easily. :think-about-it: