I don’t know how there aren’t a myriad of problems associated with attempting to emulate the brain, especially with the end goal of destroying livelihoods and replacing one indentured servant with another. In fact, that’s what prompted this post- an advertisement for a talk with my alma mater’s philosophy department asking what happens when LLMs discover phenomenological awareness.
I admit that I don’t have a ton of formal experience with philosophy, but I took one course in college that will forever be etched into my brain. Essentially, my professor explained to us the concept of a neural network and how with more computing power, researchers hope to emulate the brain and establish a consciousness baseline with which to compare a human’s subjective experience.
This didn’t use to be the case, but in a particular sector, most people’s jobs are just showing up at work, getting on a computer, and having whatever (completely unregulated and resource devouring) LLM give them answers they could find themselves, quicker. And shit like neuralink exists and I think the next step will be to offer that with a chatgpt integration or some dystopian shit.
Call me crazy, but I don’t think humans are as special as we think we are, and our pure arrogance wouldn’t stop us from creating another self and causing that self to suffer. Hell, we collectively decided to slaughter en masse another collective group with feelings (animals) to appease our tastebuds, a lot of us are thoroughly entrenched in our digital boxes because opting out would mean losing things we take for granted, and any discussion of these topics is taboo.
Data-obsessed weirdos are a genuine threat to humanity. Consciousness-emulation never should have been a conversation piece in the first place without first understanding its downstream implications. Feeling like a certified Luddite these days
Why does everyone head to the trenches about consciousness this or that? Why even try to emulate the brain when transformers produce the results they do? All these systems need to do is convincingly approximate human behavior well enough and they can automate most professional jobs out there, and in doing so upend society as we know it. These systems don't need a soul to scab for labor.
What's the observable difference between something that closely emulates human behaviour and something with a "soul"?
What is the observable difference between a rock and a person who keeps their mouth shut? When you only live on the internet they are indistinguishable.
We know a world exists outside of the internet, as far as we know anything.
We might choose to believe in a soul, but with no evidence there's not really any point in bringing it up as a quality something can have.
To say that something does or does not have value because of the presence of a soul is the same as saying that something doesn't have value because I've decided it lacks the intangible property of valuableness.
Souls are dumb ideas. All of the suffering, boredom, the heavens and hells are here on earth.
What difference does it make? Does Bob from accounting have a soul? I can't answer that either.
Well I am confident an AI could replicate the kind of alt account activity you like, and isn't that more important? Filling the trough and appearing to be in good company?