:what-the-hell:
In the latest study, published in the journal Nature Neuroscience on Monday, scientists found that an AI system called a semantic decoder can translate a person's brain activity, as they listened to a story or imagined telling a story, into text.
The new tool relies partly on models similar to the ones that power the now-famous AI chatbots – OpenAI’s ChatGPT and Google’s Bard – to convey “the gist” of people’s thoughts from analysing their brain activity.
But unlike many previous attempts to read people's minds, scientists said the system does not require subjects to have surgical implants, making the process noninvasive.
...
Addressing questions about the potential misuse of the technology, such as by authoritarian governments to spy on citizens, scientists noted that the AI worked only with cooperative participants who had willingly taken part in extensively training the decoder.
For individuals on whom the decoder had not been trained, they said the results were “unintelligible”.
main comment:
For individuals on whom the decoder had not been trained, they said the results were “unintelligible”.
I see a slim silver lining here. fMRI is incredibly noisy and bulky, and the inability of this procedure (for now, and probably forever, barring an enormous paradigm shift in neuroscience) to do zero-shot decoding (i.e. decode without training data on the subject) means that lots of factors, from drug use to breathing erratically, probably hamper even a trained decoder. Inshallah the techbros don't stumble on a way to remotely sense fMRI (or sufficient statistics thereof) at a significant distance from a moving brain.
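To make the zero-shot point concrete, here's a toy sketch. Caveat: this is my illustration, not the paper's actual pipeline (Tang et al. fit an encoding model and run a language-model-guided beam search); I'm just simulating why a decoder fit on one subject's voxel responses transfers to held-out data from that subject but not to a different subject whose cortex mixes the same semantics differently. All the numbers and shapes are invented.

```python
# Toy demo: per-subject decoding works, zero-shot transfer doesn't.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_voxels, n_sem = 500, 200, 16

# Shared semantic features of the stimuli (same stories for everyone).
sem = rng.standard_normal((n_samples, n_sem))

# Assumption: an arbitrary per-subject linear mixing plus noise stands in
# for individual differences in cortical development.
def simulate_subject(sem, noise=0.5):
    mixing = rng.standard_normal((n_sem, n_voxels))
    return sem @ mixing + noise * rng.standard_normal((len(sem), n_voxels))

brain_a = simulate_subject(sem)
brain_b = simulate_subject(sem)  # different subject, same stimuli

# Train a decoder (voxels -> semantic features) on subject A only.
decoder = Ridge(alpha=10.0).fit(brain_a[:400], sem[:400])

def mean_corr(brain):
    pred = decoder.predict(brain[400:])
    return np.mean([np.corrcoef(pred[:, i], sem[400:, i])[0, 1]
                    for i in range(n_sem)])

print("same subject, held-out data:", round(mean_corr(brain_a), 2))  # high
print("different subject (zero-shot):", round(mean_corr(brain_b), 2))  # ~0
```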
rant:
why the fuck do "news" outlets never fucking link to the paper?!?!?!
I see a slim silver lining here.
More than a slim silver lining: according to the summary, they can't construct intelligible results unless the subject is cooperative during both training and testing. By the sounds of it, it's not exactly something you can do without at least the person's knowledge.
Indeed, and I would think there are enough intrinsic differences in individual cortical development that prediction without high-quality supervised training will forever be impossible; that is, zero- or few-shot learning just wouldn't work.
they're scared that if you can read primary sources you won't need them to do a mediocre summary
examples of the decoded stimulus are on page 4. really impressive that it was able to figure out various words, but it is still kinda nonsensical.
indeed, and maybe I can be optimistic that eventually some ultra-wealthy shitass with locked-in syndrome might get to talk to their family again (assuming they rigorously trained a model and brain activity isn't subject to semantic drift)
fr. fucking deranged that there is no paper.
though i wonder if beating someone with a wrench is equivalent to training it
Remotely sensing fMRI would, no shit, be a medical science miracle that would revolutionize medicine. Also, I don't think the actual physics could work lol. But it would basically be the Star Trek tricorder.
For sure. Something scarier would be if you didn't need fMRI but could use some other modality. A researcher at the institution I worked at almost a decade ago had a grant for a project on reconstructing fMRI from ultrasound, but that requires direct contact too. I'm keeping my fingers crossed that the skull is thick enough, and 70 mV spikes small enough, that there's just inherently no good information recordable at a distance, nothing sufficient to reconstruct with high fidelity the neural signatures of language or other kinds of conscious thought.
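A crude back-of-envelope on why distance alone is brutal, with big assumptions on my part: treat a patch of synchronized cortex as a current dipole whose potential falls off roughly like 1/r², and ignore skull and scalp attenuation entirely (which only makes things worse).

```python
# Relative signal amplitude vs. recording distance, assuming dipole-like
# 1/r^2 falloff and zero tissue attenuation (both loudly hedged guesses).
distances_m = {
    "intracranial electrode (~2 mm)": 2e-3,
    "scalp EEG (~15 mm)": 15e-3,
    "across the room (~2 m)": 2.0,
}
ref = distances_m["intracranial electrode (~2 mm)"]
for label, r in distances_m.items():
    print(f"{label}: ~{(ref / r) ** 2:.0e} of the intracranial amplitude")
```

Even with zero attenuation you're down about six orders of magnitude by the time you're across the room, before you even try to unmix billions of neurons.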
For individuals on whom the decoder had not been trained, they said the results were “unintelligible”.
Just make sure everybody trains a mind reading model of their own.
"Sign up for your free FriendAI account today and control all your devices with your mind!"
It worked for location tracking devices (smartphones) so why not for this?
“Sign up for your free FriendAI account today and control all your devices with your mind!”
You know, I actually know at least one person who would legitimately sign up for that without any second thought, hesitation, or concern about privacy or any other implications.
The person in question once said they would "hook their brain directly into Google" if they could, and continued to enthusiastically support Uber even after I explained to her in great detail how awful a company they are.
She's the kind of person who, when she sees a shiny thing, likes and wants the shiny thing :yea:
This could be kind of a game changer for people with locked-in syndrome or similar. If the training is just doing flash cards or something, you're giving them a way to rapidly express themselves.
Yeah, that's what I'm thinking too. Would be great for those who got the syndrome from Le Scaphandre et le Papillon (The Diving Bell and the Butterfly) but aren't lucky enough to still have one working eyelid so they can blink on a letter.
Poor Kamala is going to be usurped by comatose Biden and ChatGPT.
Comatose Biden on the golden throne. Brain scan GPT hooked up and reading his memories of long cars. The secret service custodes burning incense in order to help the algorithm spirit decide which country he wants to bomb next.
Seems pretty straightforward and not that scary to me. You scan your brain's activity to make a "data set" for the program to use as a rubric for interpreting the brain's activity. It's just as much "mind reading" as ChatGPT is "AI."
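If you want that "rubric" intuition in code, here's a toy lookup-table version. To be clear, the real system regresses continuous semantic features and leans on a language model to generate word sequences; everything below (phrases, pattern sizes, noise level) is invented for illustration.

```python
# Toy version of the "rubric" idea: record brain responses to known
# phrases during training, then interpret a new scan by finding the
# closest recorded pattern.
import numpy as np

rng = np.random.default_rng(1)
phrases = ["i am hungry", "call my sister", "turn off the light"]

# Training: one stored template pattern per phrase (assumption: repeated
# presentations of a phrase give a stable, noisy voxel pattern).
templates = {p: rng.standard_normal(50) for p in phrases}

def decode(scan):
    # Cosine similarity against every template in the rubric.
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(templates, key=lambda p: cos(scan, templates[p]))

# A new noisy scan of the subject thinking "call my sister".
scan = templates["call my sister"] + 0.5 * rng.standard_normal(50)
print(decode(scan))  # "call my sister" (usually)
```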
Addressing questions about the potential misuse of the technology, such as by authoritarian governments to spy on citizens, scientists noted that the AI worked only with cooperative participants who had willingly taken part in extensively training the decoder.
Of course, what happens if you ask any of these other LLMs a question for which they have no data?
Pretty neat concept. IMO I wouldn't expect it to communicate any more effectively than someone who knows you pretty well speaking on your behalf.
If it can get really good at interpreting generally, then letting the person "revise" the answer would be awesome. Just get it 100% accurate at detecting something like "no, that's wrong, Mr. Brainbot", or some unique safe word that lets them classify the responses in real time.
Then you could use that feedback to make both the current response and future responses more accurate. Something like the sketch below.
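Here's roughly what I mean as a toy revise-loop (my framing, nothing from the paper; the veto detector is a stub standing in for a hypothetical near-perfect safe-word classifier):

```python
# Sketch: present ranked candidate decodings, let a detected "veto"
# thought reject the current one, and log the verdicts as labels for
# retraining the decoder later.
from collections import deque

def present_with_revision(candidates, detect_veto, feedback_log):
    """candidates: list of (text, score) pairs.
    detect_veto: returns True if the subject signals 'no, that is wrong'."""
    queue = deque(sorted(candidates, key=lambda c: -c[1]))
    while queue:
        text, _ = queue.popleft()
        if detect_veto(text):
            feedback_log.append((text, "rejected"))  # future negative label
            continue
        feedback_log.append((text, "accepted"))      # future positive label
        return text
    return None  # everything vetoed; ask the subject to try again

# Usage with a stub detector that vetoes the first guess only.
log = []
vetoes = iter([True, False])
print(present_with_revision(
    [("thinking about lunch", 0.9), ("thinking about launch", 0.7)],
    detect_veto=lambda _: next(vetoes),
    feedback_log=log,
))
print(log)
```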
Anyway, it's AI, so I'm gonna make the safe bet and say the entire thing is actually overfitting.
https://www.nature.com/articles/s41593-023-01304-9
As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder.
Oh whatever, they were probably like "okay now imagine flying. Okay now imagine smelling a toilet. Okay now think about your feet, now think about your head."