Podcast description: Materialism is dead. There are simply too many questions left unanswered after years of studying the brain. Now, people are scrambling for a new way to understand the mind-body relationship. Cartesian dualism has become a whipping boy in philosophy, but it has advantages over the alternatives. Dr. Joshua Farris discusses Cartesianism and philosophy with Dr. Michael Egnor.
I'd urge you to reconsider some of that and do some research beyond the thought-terminating gotchas that rational-nihilist debatebro types spout all the time. I used to think like you, but then I delved into the topic in a bit more detail, and the more I did, the less hardline reductionism made sense.
"Consciousness is just what the matter in the neurons does and nothing else" is a tenuous position that isn't even all that well backed by actual data; it's at best one of the working theories, and far from proven beyond reasonable doubt. The only reason it's considered a valid position, IMO, is that it's en vogue to be a hardline reductionist nihilist in mainstream western scientific circles.
I don't listen to rational-nihilist debatebro types. I developed my (rather rudimentary) ontology through practice in science first, and through reading Engels, Althusser, Deleuze and others second.
I'm precisely asking you what thought there is beyond it. What real phenomenon can you explain that I cannot? In particular, what is the meaning of having "actual data" about consciousness if you say it's beyond the purview of the material world, and therefore beyond all measurement instruments I can think of?
Consciousness itself.
Reductionism cannot even conceptually close the explanatory gap between the quantitative world of matter and the qualitative world of conscious experience. Even if you got damn near perfect 999-sigma correlations between states of matter and reports of conscious experience, the gap would be no closer to being closed.
Also Engels would call you a "vulgar materialist" and wouldn't really agree with you there.
Why are you so hostile? I'm not any closer to understanding your point of view. Instead of saying that Engels would dunk on me, why don't you tell me what it is from him that I'm missing?
In your thought experiment, as I understand it, you're proposing a machine or computer simulation that, on one hand, measures exactly the same as another material thing that harbours consciousness (presumably a human brain? Does it make sense to you to locate consciousness in space?), and on the other hand reports itself as conscious and feeling.
If such an experiment were possible—when you say it's 999 sigma, you're implying it's repeatable in laboratory conditions—then indeed I would consider it a complete physical explanation of consciousness. It walks like a duck, it quacks like a duck, it even tells you that it's a duck; what else could possibly be there to explain?
However, I doubt we'll get anywhere if we keep talking past each other. In hopes of reaching common ground, I'd appreciate it if you picked one of the topics I mentioned a few comments ago and told me what you think about it or how it relates to your ontology:
You started first with comparisons to sacrifice and rain dances, but whatever.
My whole point is that there seems to be more to the universe than just matter. I have no strong opinion on what that really means, or on what level this other stuff relates to matter, but hard physicalism is a dead end IMO.
I'm sorry, that was not my intention. I meant that I found all of those things to be dead ends, and things probably everyone on this site disagrees with. Some of them are common sense today, but others are still debated, and materialism allows me to disregard them in a quick and grounded way, along with everything else that assumes there's something outside of matter influencing matter in any way.
The whole point of having an ideology to me is to separate the wheat from the chaff and avoid wasting time considering things that ultimately don't matter. I don't see the point of philosophy that does not serve a purpose. If you say that you believe there's something beyond matter but you have no opinion on what that means or how that relates to the world, alright, we might as well agree because it makes no difference.
A practical point of view like that is pretty good for most contexts, but IMO when it gets to philosophically tricky stuff like this, you kind of have to dig a bit deeper if you want to do your due diligence.
I'm sorry I was also being kind of a dick.
What I struggle with is: how do you find truth in things that are unmeasurable? I assume there are multiple interpretations—how do you pick one over the others? Or is that beside the point, like in pure mathematics, where the objective is aesthetics and consistency rather than relation to the world?
You're already finding truth in the unmeasurable. The qualities of your experiences are unmeasurable in and of themselves, and you believe they exist, right? You can't measure the redness of red, just the wavelength that causes it.
Let's go back to the 999-sigma, super precise model of neural correlates. The measurements you have are configurations of neurons and electrical charges and whatnot; the qualities of the consciousness you don't really know, because you can't measure them—you just kind of trust that the test subject is telling you the truth about them when they report what they feel.
I believe in the qualities of my experiences only to the point that I acknowledge they exist, and I also believe, subjectively, that there's a world out there that I can perceive with my senses (because doing otherwise would make everything meaningless), but I wouldn't really say that I find truth in my personal experiences.
Upon examination, I'd say that both at work and in everyday life I find truth collectively; I'd even say most people doubt the veracity of their experiences on some level. Say, for example, you saw a UFO through the window—what would be your reaction? Would you stare, confident of having experienced a UFO? I think most people would try to take a picture, or go to the nearest person and ask them, 'Do you see that?'
When researching, I do the same thing: first I discuss my findings with my colleagues, and eventually I attempt to publish what I experienced and see if someone unrelated to me can agree with my method and maybe even replicate it. Only then can I consider my experience truth. The problem, of course, is: what happens when someone disagrees? It is there that measurements become essential.
So it is in others that I separate subjectivity from truth, and I think your example of redness is very appropriate here: the only reason there's even a concept of red is that the vast majority of people have the same chemistry in their eyes, and because of that they can agree that blood, a sunset, and a rose all have a quality in common. If this weren't the case, the idea of redness would not exist, and that experience would only be understood as one person finding how a certain thing looks interesting and another person finding it mundane. Redness, like everything else in consciousness, is mediated by the material world.
What happens beyond the chemistry does not matter at all: whether the next person perceives a red light the way I experience green, or (what's most likely) in a completely different way that to me is unknowable, makes no difference to the redness of a rose or the squareness of a square. That redness is a property of the object and not of the subject is evident in that even a colourblind person will know that the rose is red, even if they can't quite tell with their eyes and can only be certain through other people or machines. Eventually we've managed to replicate the chemistry of the eye, measure colour itself, and transmit any visual experience as pure data over copper wire, at ever-increasing levels of fidelity.
As for the super precise model of a brain, remember that the most important property of a model is that it can predict behaviour. Once such a model is constructed, you can simulate it in a machine, and this machine will say that it is conscious and will respond in every way the brain it was modelled after would. The machine will probably be very afraid and need consolation when it learns that its body has very different needs from what it was used to. You're right that I can't put on a graph what being another person is like, but precisely because of that, I also won't be able to say that this hypothetical machine is not conscious. I would, for all practical purposes, have modelled consciousness.