guy recently linked this essay. it's old, but i don't think it's significantly wrong (despite gpt evangelists). also read weizenbaum, libs, for the other side of the coin
I would love to see some of the studies you believe show this. I have seen several over the last decade and concluded that most of them are bunk, or at best able to recognize one brain-signal pattern, and that that pattern turns out to be indistinguishable from some others (a lamp and a basket look nothing alike, yet the brain map for "lamp" also comes back for "bus" for some reason).
It's not a useful endeavor in my opinion, and my conclusion is that using computer experience and languages as a model is a pretty shit model. It has more predictive possibilities than psychology, but it's wildly inaccurate and unable to predict its own inaccuracy. It's good to push back, because its accuracy is wildly inflated by stembros.
The fmri ones are probably bunk. That said, if you could manage the heinous act of cw: body gore
spoiler
implanting several thousand very small wires throughout someone's visual cortex, and recording the responses evoked by specific stimuli or by instructions to visualize a given stimulus, you could probably produce low-fidelity reconstructions of their visual perception
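to make "low fidelity reconstruction" concrete: the usual decoding move is to fit a linear map from many units' responses back to the stimulus. here's a toy sketch of that idea in numpy — the linear encoding model, noise level, and all the sizes are invented for illustration, not real neuroscience.

```python
import numpy as np

# Toy sketch: if each of many recorded units responds roughly linearly to a
# visual stimulus, a linear (ridge-regression) decoder fit on known
# stimulus/response pairs can produce a low-fidelity reconstruction of a new
# stimulus from its neural response. Every number here is made up.

rng = np.random.default_rng(0)
n_pixels, n_units, n_trials = 64, 2000, 500

# Unknown "encoding" from stimulus pixels to unit firing rates, plus noise.
encoding = rng.normal(size=(n_units, n_pixels))
stimuli = rng.normal(size=(n_trials, n_pixels))  # training images (flattened)
responses = stimuli @ encoding.T + rng.normal(scale=5.0, size=(n_trials, n_units))

# Fit a ridge-regression decoder: responses -> stimulus pixels.
lam = 10.0
A = responses.T @ responses + lam * np.eye(n_units)
decoder = np.linalg.solve(A, responses.T @ stimuli)  # shape (n_units, n_pixels)

# Decode a held-out stimulus from its (noisy) neural response.
test_stim = rng.normal(size=n_pixels)
test_resp = encoding @ test_stim + rng.normal(scale=5.0, size=n_units)
reconstruction = test_resp @ decoder

# "Low fidelity": correlated with the truth, but not exact.
r = np.corrcoef(test_stim, reconstruction)[0, 1]
print(f"correlation between true and decoded stimulus: {r:.2f}")
```

note the catch this thread keeps circling: the decoder gives you a number like that correlation, but nothing in the procedure tells you where or why the linear assumption breaks down.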
are you familiar with the crimes of Hubel and Wiesel?
I am not, and I will look it up in a minute.
But my point is that such a low-fidelity reconstruction, when interpreted through the model of modern computing methods, lacks the accuracy for any application AND, crucially, has absolutely no way to account for or understand its limitations in relation to the intended applications. That last part is more a philosophy-of-science argument than one about some percentage accuracy: the model has no way to understand its limitations because we don't have any idea what those are, and discussion of this is limited as far as I know, leaving no ceiling on the interpretations and implications.
I think a big difference between positions in this thread, though, is between those talking about how the best neuroscientists in the world think about this, and those who are more technologists who never reached that level and want to Frankenstein their way to tech-bro godhood. I'm sure the top neuros get this, and are constantly trying to find new and better models. But their publications aren't the ones on the covers of science journals.