:what-the-hell:

In the latest study, published in the journal Nature Neuroscience on Monday, scientists found that an AI system called a semantic decoder can translate into text a person’s brain activity as they listen to a story or imagine telling one.

The new tool relies partly on models similar to the ones that power the now-famous AI chatbots – OpenAI’s ChatGPT and Google’s Bard – to convey “the gist” of people’s thoughts by analysing their brain activity.

But unlike many previous attempts to read people’s minds, scientists said, the system does not require subjects to have surgical implants, making the process noninvasive.

...

Addressing questions about the potential misuse of the technology, such as by authoritarian governments to spy on citizens, the scientists noted that the AI worked only with cooperative participants who had willingly taken part in extensive training of the decoder.

For individuals on whom the decoder had not been trained, they said the results were “unintelligible”.
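Since the article doesn’t link the paper, here is a hedged sketch of the kind of pipeline the excerpt describes: a language model proposes candidate continuations, and a per-subject encoding model predicts the fMRI responses each candidate would evoke, keeping whichever candidates best match the recorded scans. This is a toy illustration only, not the authors’ code; the “language model” below is a stub, and every name, size, and number is a made-up placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 500     # voxels in the recorded cortical region (placeholder)
N_FEATURES = 64    # dimensionality of the language-model features (placeholder)

def lm_propose(prefix_words, n_candidates=4):
    """Stub for a language model suggesting likely next words given a prefix."""
    vocab = ["the", "she", "said", "went", "home", "then", "suddenly", "quiet"]
    return [str(w) for w in rng.choice(vocab, size=n_candidates, replace=False)]

def lm_features(words):
    """Stub for contextual features (e.g. hidden states) of a word sequence."""
    return rng.standard_normal(N_FEATURES)

# Per-subject encoding model: a linear map from language features to voxel
# responses, fit beforehand on hours of that one subject's own training scans.
W_subject = rng.standard_normal((N_VOXELS, N_FEATURES))

def predicted_response(words):
    return W_subject @ lm_features(words)

def decode(recorded, n_steps=8, beam_width=3):
    """Beam search over word sequences, scored by how well the responses they
    are predicted to evoke correlate with the responses actually recorded."""
    beams = [([], 0.0)]  # (word sequence, cumulative score)
    for t in range(n_steps):
        scored = []
        for words, score in beams:
            for w in lm_propose(words):
                cand = words + [w]
                r = np.corrcoef(predicted_response(cand), recorded[t])[0, 1]
                scored.append((cand, score + r))
        beams = sorted(scored, key=lambda b: b[1], reverse=True)[:beam_width]
    return " ".join(beams[0][0])

# Synthetic "recorded" scans so the sketch runs end to end.
fake_scans = rng.standard_normal((8, N_VOXELS))
print(decode(fake_scans))
```

Even in this toy form, nothing works unless W_subject has been fit on that one person’s own training scans, which is the point the comments below pick up on.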

  • 4_AOC_DMT [any]
    ·
    2 years ago

    main comment:

    For individuals on whom the decoder had not been trained, they said the results were “unintelligible”.

    I see a slim silver lining here. fMRI is incredibly noisy and bulky, and the inability of this procedure (for now, and probably forever barring an enormous paradigm shift in neuroscience) to do zero-shot decoding (i.e. without training data on the subject) means that lots of factors, from drug use to breathing erratically, probably hamper even a trained decoder (rough sketch of what I mean below the rant). Inshallah the techbros don't stumble on a way to remotely sense fMRI (or sufficient statistics thereof) at significant distances from moving brains.

    rant:

    why the fuck do "news" outlets never fucking link to the paper?!?!?!
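
    A hedged numpy toy of that zero-shot point, nothing from the paper (all shapes and data below are synthetic placeholders): the decoder rests on a per-subject encoding model, and that model's weights only mean anything for the brain they were fit on.

    ```python
    # Per-subject encoding model: a separate linear map from language features
    # to *that subject's* voxel responses, fit on paired (stimulus, fMRI) data
    # gathered while the subject cooperatively listens to training stories.
    import numpy as np

    rng = np.random.default_rng(1)
    n_trs, n_features, n_voxels = 300, 64, 500   # scan volumes, feature dim, voxels

    def fit_encoding_model(features, responses, alpha=10.0):
        """Closed-form ridge regression so that responses ~ features @ W."""
        return np.linalg.solve(
            features.T @ features + alpha * np.eye(features.shape[1]),
            features.T @ responses,
        )

    # Subject A's own training data -> weights usable only for subject A.
    features_A = rng.standard_normal((n_trs, n_features))
    responses_A = rng.standard_normal((n_trs, n_voxels))
    W_A = fit_encoding_model(features_A, responses_A)

    # Applying A's weights to subject B is the zero-shot setting the article
    # calls "unintelligible": there is no shared voxel space across brains,
    # so the predictions land in the wrong coordinate system entirely.
    features_B = rng.standard_normal((n_trs, n_features))   # what B heard
    responses_B = rng.standard_normal((n_trs, n_voxels))    # what B's brain did
    pred_B = features_B @ W_A                                # predicted with A's weights
    r = np.corrcoef(pred_B.ravel(), responses_B.ravel())[0, 1]
    print(f"cross-subject correlation on this synthetic data: {r:.3f}")  # ~0
    ```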

    • hexaflexagonbear [he/him]
      ·
      2 years ago

      I see a slim silver lining here.

      More than a slim silver lining: according to the summary, they can't construct intelligible results unless the subject is cooperative during both training and testing. It's not exactly something you can do without at least people's knowledge, by the sounds of it.

      • 4_AOC_DMT [any]
        ·
        2 years ago

        Indeed, and I would think there are enough intrinsic differences in individual cortical development that prediction without high-quality supervised training will forever be impossible; that is, zero- or few-shot learning just wouldn't work.

    • emizeko [they/them]
      ·
      2 years ago

      they're scared that if you can read primary sources you won't need them to do a mediocre summary

    • kristina [she/her]
      ·
      edit-2
      2 years ago

      examples of the decoded stimulus are on page 4. really impressive that it was able to figure out various words, but it is still kinda nonsensical.

      • 4_AOC_DMT [any]
        ·
        edit-2
        2 years ago

        indeed, and maybe I can be optimistic that eventually some ultra wealthy shitass with locked-in syndrome might get to talk to their family again (assuming they rigorously trained a model and brain activity isn't subject to semantic drift)

    • kristina [she/her]
      ·
      edit-2
      2 years ago

      fr. fucking deranged that there is no paper.

      though i wonder if beating someone with a wrench is equivalent to training it

    • TerminalEncounter [she/her]
      ·
      2 years ago

      Remotely sensing fMRI would, no shit, be a medical science miracle that would revolutionize medicine; also, I don't think the actual physics could work lol. But it would basically be the Star Trek tricorder.

      • 4_AOC_DMT [any]
        ·
        2 years ago

        For sure. Something scarier would be if you didn't need fMRI but could use some other modality. A researcher at the institution I worked at almost a decade ago had a grant for a project on reconstructing fMRI from ultrasound, but that requires direct contact too. I'm keeping my fingers crossed that the skull is thick enough, and 70 mV spikes are small enough, that there's just inherently no information recordable at a distance that's good enough to reconstruct, with high fidelity, the neural signatures of language or other kinds of conscious thought.