I don’t know how there aren’t a myriad of problems associated with attempting to emulate the brain, especially with the end goal of destroying livelihoods and replacing one indentured servant with another. In fact, that’s what prompted this post: an advertisement for a talk with my alma mater’s philosophy department asking what happens when we see LLMs discover phenomenological awareness.

I admit that I don’t have a ton of formal experience with philosophy, but I took one course in college that will forever be etched into my brain. Essentially, my professor explained to us the concept of a neural network and how, with more computing power, researchers hope to emulate the brain and establish a consciousness baseline against which to compare a human’s subjective experience.

This didn’t used to be the case, but in a particular sector, most people’s jobs now amount to showing up at work, getting on a computer, and having whatever (completely unregulated and resource-devouring) LLM give them answers they could find themselves, just quicker. And shit like Neuralink exists, and I think the next step will be to offer that with a ChatGPT integration or some dystopian shit.

Call me crazy, but I don’t think humans are as special as we think we are, and our pure arrogance wouldn’t stop us from creating another self and causing that self to suffer. Hell, we collectively decided to slaughter en masse another group of beings with feelings (animals) to appease our tastebuds; a lot of us are thoroughly entrenched in our digital boxes because opting out would mean losing things we take for granted; and any discussion of these topics is taboo.

Data-obsessed weirdos are a genuine threat to humanity; consciousness emulation never should have been a conversation piece in the first place without first understanding its downstream implications. Feeling like a certified Luddite these days.

  • Hohsia [he/him] (hexagon) · edit-2 · 3 days ago

    Yeah, sorry, I might’ve jumped the gun there, but I’m legitimately starting to contemplate peacing out of corporate America in general, because all anyone can talk about is “using AI to increase productivity” (read: replace workers), and I haven’t been able to escape the claim that whatever a human can do, a computer can do too (in the context of replicating monotonous computer-touching tasks).

    I guess the whole sentience-vs-non-sentience question is a separate one entirely (and a moot one by the sound of it), but that has not stopped the powers that be from dumping as much money as possible into these treat printers.

    And after all, this isn’t the first time I haven’t been able to outrun the propaganda. It’s just becoming increasingly difficult to sift through the bullshit, and that’s another reason I think the question of consciousness won’t matter in the short term. But we do know that consciousness is the result of activity in the brain (and we can demonstrate this with MRIs of stroke victims and such).