the-podcast guy recently linked this essay. it's old, but i don't think it's significantly wrong (despite gpt evangelists). also read weizenbaum, libs, for the other side of the coin

  • dat_math [they/them]
    hexbear
    1 month ago

    being able to decode cortical activity doesn't necessarily mean that the activity serves as a representation in the brain

    I'm sorry: I don't mean to be an ass, but this seems nonsensical to me. Definitionally, being able to decode some neuronal signals means that those signals carry information about the variable they encode. Thus, if those vectors of simultaneous spike trains are received by any other part of the body in question, then the representation has been communicated.

    Firstly, the decoder must be trained and secondly, there is a thing called representational drift.

    Why does the fact that a decoder must be trained (as in experimental work that reverse-engineers neural codes) imply that the neural correlates of some real-world stimulus are not representing that stimulus?

    I have a similar issue seeing how representational drift invalidates that idea as well, especially since the circuits receiving the signals in question are plastic and dynamically adapt their responses to changes in their inputs as well.
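    For intuition, here's a toy sketch of the point (everything here is invented for illustration, none of it is from Brette's paper): a model population encodes a scalar stimulus through Gaussian tuning curves plus Poisson noise, and a linear decoder fit on half the trials still reads the stimulus out of held-out spike counts. The decoder needing training doesn't change the fact that the spike counts carry the stimulus information.

```python
import numpy as np

# Hypothetical toy model: 16 neurons with Gaussian tuning curves over a
# scalar stimulus in [0, 1]; spike counts drawn from a Poisson distribution.
rng = np.random.default_rng(1)
stim = rng.uniform(0, 1, size=400)                   # made-up stimulus values
prefs = np.linspace(0, 1, 16)                        # preferred stimuli
rates = 5 + 40 * np.exp(-((stim[:, None] - prefs[None, :]) ** 2) / (2 * 0.1 ** 2))
spikes = rng.poisson(rates).astype(float)            # noisy spike counts

# Fit a least-squares linear decoder on the first half of the trials,
# then evaluate it on held-out trials it was never fit to.
X = np.hstack([spikes, np.ones((400, 1))])           # counts + bias term
train, test = slice(0, 200), slice(200, 400)
w, *_ = np.linalg.lstsq(X[train], stim[train], rcond=None)
err = np.mean(np.abs(X[test] @ w - stim[test]))
print(f"mean held-out decoding error: {err:.3f}")
```

    In this picture, representational drift would correspond to the tuning curves slowly shifting across sessions; a plastic downstream circuit (or a retrained decoder) can track that shift without the information ever going away.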

    I started reading Brette's paper that you recommended, and I'm finding the same problems with Romain's idea salad. He says things like, "Climate scientists, for example, rarely ask how rain encodes atmospheric pressure."

    And while I think that's not exactly the terminology they use, in the sense that they might model rain = couplingFunction(atmospheric pressure) + noise, they are in fact mathematically asking that very question!
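    To make that concrete, here is what "asking how rain encodes pressure" looks like in regression language (the coupling function and every number below are invented, this is not real climatology): fit rain = f(pressure) + noise and ask how much of the variance in rain the pressure signal accounts for.

```python
import numpy as np

# Invented toy data: rain driven by a made-up hinge-shaped coupling
# function of atmospheric pressure, plus noise.
rng = np.random.default_rng(0)
pressure = rng.uniform(980, 1040, size=200)               # hPa, fabricated
rain = 2.5 * np.clip(1013 - pressure, 0, None) + rng.normal(0, 3, size=200)

# Fit a linear coupling function; R^2 is the "how well does rain carry
# information about pressure?" question phrased as regression.
A = np.vstack([pressure, np.ones_like(pressure)]).T
coeffs, *_ = np.linalg.lstsq(A, rain, rcond=None)
resid = rain - A @ coeffs
r2 = 1 - np.sum(resid ** 2) / np.sum((rain - rain.mean()) ** 2)
print(f"slope = {coeffs[0]:.2f} mm/hPa, R^2 = {r2:.2f}")
```

    Whether you call the fitted relationship an "encoding" or a "coupling" is terminology; the mathematical move is the same.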

    Am I nit-picking or is this not an example of Brette doing the same deliberate misunderstanding of the communications metaphor as the article in the original post?

    Does it make sense for the brain to encode the outside world, into its own activity (spikes), then to decode it into its own activity again?

    It might? But the question seems computationally silly. I would expect that efferent circuitry receiving signals encoded as vectors of simultaneous spikes would not do extra work to re-map the lossy signal it receives back into the original stimulus space. Perhaps it would apply some other transformations to integrate the signal with other information, but why would circuitry grown by STDP undo the effort of the earlier populations of neurons that performed the initial compression?

    Sorry again if my STEM education is preventing me from seeing meaning through a forest of mixed and imprecisely applied metaphors.

    I'm going to go read Brette's responses to commentary on the paper you linked and see if I'm just being a thickheaded stemlord

    • Sidereal223 [he/him]
      hexbear
      1 month ago

      That's fine, I don't think you're being an ass at all. Brette is saying that just because there is a correspondence between the measured spike signals and the presented stimuli, that does not qualify the measured signals as a representation. For something to be a representation, it also needs a feature of abstraction. The relation between an image and neural firing depends on auditory, visual, and behavioural context; it changes over time; and imperceptible pixel changes to the image also substantially alter neural firing. According to Brette, once you take all of this into account there is little left of the concept of a neural representation, and you're better off calling it a neural correlate.