Podcast description: Materialism is dead. There are simply too many questions left unanswered after years of studying the brain. Now, people are scrambling for a new way to understand the mind-body relationship. Cartesian dualism has become a whipping boy in philosophy, but it has advantages over the alternatives. Dr. Joshua Farris discusses Cartesianism and philosophy with Dr. Michael Egnor.
What I struggle with is how you find truth in things that are unmeasurable. I assume there are multiple interpretations, so how do you pick one over the other? Or is that beside the point, as in, say, pure mathematics, where the objective is aesthetics and consistency, not relation to the world?
You're already finding truth in the unmeasurable. The qualities of your experiences are unmeasurable in and of themselves, and you believe they exist, right? You can't measure the redness of red, only the wavelength that causes it.
Let's go back to the 999-sigma, super-precise model of neural correlates. What you actually measure are configurations of neurons, electrical charges, and so on. The qualities of the consciousness you don't really know, because you can't measure them; you just trust that the test subject is telling the truth about them when they report what they feel.
I believe in the qualities of my experiences only to the point of acknowledging that they exist, and I likewise take it on trust that there's a world out there that I can perceive with my senses (because doing otherwise would make everything meaningless), but I wouldn't really say that I find truth in my personal experiences.
Upon examination, I'd say that both at work and in everyday life I find truth collectively; I'd even say most people doubt the veracity of their experiences on some level. Say, for example, you saw a UFO through the window. What would be your reaction? Would you just stare, confident of having experienced a UFO? I think most people would try to take a picture, or go to the nearest person and ask, 'Do you see that?'
When researching I do the same thing: first I discuss my findings with my colleagues, and eventually I attempt to publish what I experienced and see whether someone unrelated to me agrees with my method and maybe even replicates it. Only then can I consider my experience truth. The problem, of course, is what happens when someone disagrees. It is there that measurements become essential.
So it is in others that I separate subjectivity from truth, and your example of redness is very appropriate here: the only reason there's even a concept of red is that the vast majority of people have the same chemistry in their eyes, and because of that they can agree that blood, a sunset, and a rose all have a quality in common. If this weren't the case, the idea of redness would not exist; that experience would be understood only as one person finding how a certain thing looks interesting and another finding it mundane. Redness, like everything else in consciousness, is mediated by the material world.
What happens beyond the chemistry does not matter at all: whether the next person perceives a red light the way I experience green, or (more likely) in a completely different way that is unknowable to me, makes no difference to the redness of a rose or the squareness of a square. That redness is a property of the object and not of the subject is evident in that even a colourblind person will know that the rose is red, even if they can't quite tell with their own eyes and can only be certain through other people or machines. Eventually we've managed to replicate the chemistry of the eye, measure colour itself, and transmit any visual experience as pure data over copper wire, at ever-increasing levels of fidelity.
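The point that measured colour is just transmissible data can be sketched in a few lines. This is a toy illustration, not how real imaging pipelines work (those involve colour spaces, gamma, and calibration); the colour values here are invented for the example:

```python
# Toy illustration: a colour, once measured as numbers, is pure data
# that can be serialized, sent over a wire, and reconstructed exactly.

def encode_rgb(r: int, g: int, b: int) -> bytes:
    """Pack an 8-bit RGB triple into three bytes for transmission."""
    return bytes([r, g, b])

def decode_rgb(payload: bytes) -> tuple:
    """Recover the triple on the receiving end."""
    r, g, b = payload
    return (r, g, b)

rose_red = (188, 32, 60)          # a measured colour: just three numbers
wire = encode_rgb(*rose_red)      # "sent over the copper wire"
received = decode_rgb(wire)
assert received == rose_red       # the measurement survives intact as data
```

Nothing about either party's private experience of red enters the exchange; only the numbers do, which is exactly why the transmission works.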
In the super-precise model of a brain, remember that the most important property of a model is that it can predict behaviour. Once such a model is constructed, you can simulate it in a machine, and this machine will say that it is conscious and will respond in every way the brain it was modelled after would. The machine will probably be very afraid and need consolation when it learns that its body has very different needs from the ones it was used to. You're right that I can't put on a graph what being another person is like, but precisely because of that I also won't be able to say that this hypothetical machine is not conscious. For all practical purposes, I would have modelled consciousness.