:what-the-hell:
In the latest study, published in the journal Nature Neuroscience on Monday, scientists found that an AI system called a semantic decoder can translate a person's brain activity, recorded as they listened to a story or imagined telling one, into text.
The new tool relies partly on models similar to the ones that power the now-famous AI chatbots – OpenAI’s ChatGPT and Google’s Bard – to convey “the gist” of people’s thoughts from analysing their brain activity.
But unlike many previous attempts to read people's minds, scientists said, the system does not require subjects to have surgical implants, making the process noninvasive.
...
Addressing questions about the potential misuse of the technology, such as by authoritarian governments to spy on citizens, scientists noted that the AI worked only with cooperative participants who had willingly taken part in extensive training of the decoder.
For individuals on whom the decoder had not been trained, they said the results were “unintelligible”.
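For anyone wondering what "conveying the gist" means mechanically: as I understand the paper, the decoder doesn't read words straight out of voxels. A GPT-style language model proposes candidate continuations of the transcript, a per-subject encoding model predicts the brain activity each candidate would evoke, and a beam search keeps the candidates whose predicted activity best matches what was actually measured. A minimal toy sketch of that loop, with every function, shape and number made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 50  # toy dimensionality; real scans have tens of thousands of voxels

def propose_continuations(prefix, k=3):
    """Stand-in for a GPT-style language model: return k plausible
    continuations of the transcript so far (here, just toy words)."""
    return [prefix + " " + w for w in ["story", "house", "river"][:k]]

def predict_bold(text, subject_seed):
    """Stand-in for a per-subject encoding model: map candidate text to
    the fMRI response it would be expected to evoke in that subject
    (a deterministic hash -> vector, purely for illustration)."""
    h = abs(hash((subject_seed, text))) % (2**32)
    return np.random.default_rng(h).standard_normal(N_VOXELS)

def decode_step(beams, measured_bold, subject_seed, beam_width=2):
    """One beam-search step: extend each candidate transcript, score the
    extensions by how well their predicted brain response matches the
    measured response, and keep only the best few candidates."""
    scored = []
    for prefix in beams:
        for cand in propose_continuations(prefix):
            pred = predict_bold(cand, subject_seed)
            score = float(np.corrcoef(pred, measured_bold)[0, 1])
            scored.append((score, cand))
    scored.sort(reverse=True)
    return [cand for _, cand in scored[:beam_width]]

# Toy usage: decode two "time points" of measured activity.
measured = [rng.standard_normal(N_VOXELS) for _ in range(2)]
beams = ["the"]
for bold in measured:
    beams = decode_step(beams, bold, subject_seed=42)
print(beams)
```

The point of the sketch is that both the proposals and the scoring lean on a model trained on that particular person's responses, which is presumably why individuals the decoder wasn't trained on come out as gibberish.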
More than a slim silver lining: according to the summary, they can't construct intelligible results unless the subject is cooperative during both training and testing. It's not exactly something you can do without people's knowledge, at the very least, by the sounds of it.
Indeed, and I would think there are enough intrinsic differences in individual cortical development that prediction without high-quality supervised training will remain impossible; that is, zero- or few-shot learning just wouldn't work.
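To make that concrete: the pipeline is per-subject by construction. The decoder is fit on many hours of (brain response, text) pairs recorded from one person, and nothing in it is shared across brains. Here is a toy regression sketch (synthetic data, made-up names and sizes, deliberately built so each "subject" has a private feature-to-voxel mapping) of why a model fit on one subject tells you roughly nothing about another:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
N_TRAIN, N_TEST, N_FEATS, N_VOXELS = 400, 100, 20, 50

def make_subject():
    """Toy subject: a private, random text-feature -> voxel mapping,
    standing in for individual differences in cortical organisation."""
    W = rng.standard_normal((N_FEATS, N_VOXELS))
    def record(X):
        return X @ W + 0.1 * rng.standard_normal((len(X), N_VOXELS))
    return record

# Shared stimulus features (e.g. embeddings of the story text).
X_train = rng.standard_normal((N_TRAIN, N_FEATS))
X_test = rng.standard_normal((N_TEST, N_FEATS))

subj_a, subj_b = make_subject(), make_subject()
Y_train_a = subj_a(X_train)                      # "hours of scans" from subject A
Y_test_a, Y_test_b = subj_a(X_test), subj_b(X_test)

# Fit an encoding model on subject A's data only.
model = Ridge(alpha=1.0).fit(X_train, Y_train_a)

def mean_corr(Y_pred, Y_true):
    """Average per-voxel correlation between predicted and true responses."""
    return np.mean([np.corrcoef(Y_pred[:, v], Y_true[:, v])[0, 1]
                    for v in range(N_VOXELS)])

print("within-subject (A->A):", round(mean_corr(model.predict(X_test), Y_test_a), 2))
print("cross-subject  (A->B):", round(mean_corr(model.predict(X_test), Y_test_b), 2))
```

In the toy, the within-subject correlation is high and the cross-subject one hovers around zero purely because the two mappings are independent by construction; that's the premise rather than a result, but it's the same structural reason few-shot transfer across brains is such a hard ask.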