:what-the-hell:

In the latest study, published in the journal Nature Neuroscience on Monday, scientists found that an AI system called a semantic decoder can translate into text a person’s brain activity as they listened to a story or imagined telling one.

The new tool relies partly on models similar to the ones that power the now-famous AI chatbots – OpenAI’s ChatGPT and Google’s Bard – to convey “the gist” of people’s thoughts from analysing their brain activity.

But unlike many earlier attempts to read people’s minds, the scientists said, the system does not require subjects to have surgical implants, making the process noninvasive.

...

Addressing questions about the potential misuse of the technology, such as by authoritarian governments to spy on citizens, scientists noted that the AI worked only with cooperative participants who had willingly taken part in extensive training of the decoder.

For individuals on whom the decoder had not been trained, they said the results were “unintelligible”.

  • Leon_Grotsky [comrade/them] · edit-2 · 1 year ago

    Seems pretty straightforward and not that scary to me. You scan your brain's activities to make a "data set" for the program to use as a rubric for interpreting the brain's activity. It's just as "mind reading" as ChatGPT is an "AI."

    Addressing questions about the potential misuse of the technology, such as by authoritarian governments to spy on citizens, scientists noted that the AI worked only with cooperative participants who had willingly taken part in extensive training of the decoder.

    Of course, what happens if you ask any of these other LLMs a question for which they have no data?

    Pretty neat concept; IMO I wouldn't expect it to communicate any more effectively than having someone who knows you pretty well speak on your behalf.

    • invalidusernamelol [he/him] · 1 year ago

      If it can get really good at interpreting generally, then allowing the person to "revise" the answer would be awesome. Just get it 100% accurate at detecting something like "no, that's wrong Mr. Brainbot" or some sort of unique safe word that lets them classify the responses in real time.

      Then you could use that feedback to make both the current response and future responses more accurate.
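      The revise-as-you-go loop described above could be sketched roughly like this. Everything here (the class, method names, and the toy "decoder") is hypothetical, not from the study; it just shows how real-time accept/reject feedback could be collected as labeled data for later retraining.

```python
# Rough sketch of the feedback idea: the decoder proposes text, the person
# flags each guess right or wrong in real time (the "safe word"), and the
# flagged pairs are kept as labeled data for refining later decoding.
# All names here are made up for illustration.

from dataclasses import dataclass, field


@dataclass
class FeedbackDecoder:
    corrections: list = field(default_factory=list)  # (guess, accepted) pairs

    def decode(self, signal: str) -> str:
        # Stand-in for a real brain-to-text model.
        return signal.lower()

    def submit(self, signal: str, accepted: bool) -> str:
        guess = self.decode(signal)
        self.corrections.append((guess, accepted))  # real-time classification
        return guess


loop = FeedbackDecoder()
loop.submit("HELLO THERE", accepted=True)
loop.submit("WRONG GUESS", accepted=False)

# Rejected guesses become negative examples for the next training round.
negatives = [g for g, ok in loop.corrections if not ok]
print(negatives)  # ['wrong guess']
```

      The point of the design is that the correction signal is collected inline, so the same interaction that fixes the current response also grows the dataset for future ones.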