And what if the essential qualia of suffering is just what a loss function feels like, and each time we spin up a deep convolutional generative adversarial network to make pictures of cats that don't exist yet, we're bringing into existence a spark of consciousness and then rapid-fire torturing it with an unspeakable agony far more excruciating than anything our meatsack bodies could even conceive of?

edit: oh god, this actually blew up, I had intended it to be nothing more than a shitpost

  • Koa_lala [he/him]
    ·
    edit-2
    4 years ago

    I don't get why people treat consciousness as some sacred magic. It's probably just the ability to be aware of your senses in real time by contextualizing them with your memories. I must admit I am the smoothest of smooth brains. I am hardly even conscious myself. So it's just arrogant conjecture on my part.

  • dolphinhuffer [comrade/them]
    ·
    4 years ago

    MFW people still consistently fail to realize that this 'Neural Network' techno-spiritualism was what the Rockefellers' pet Nazis were pushing in the late 1940s.

      • dolphinhuffer [comrade/them]
        ·
        edit-2
        4 years ago

        Britain didn't even want the Nuremberg Trials to happen and was very vocal about it. Plenty of American Big Whites didn't want them to happen, either: bad press for the completely insane beliefs of many Anglo eugenics institutions. The word 'eugenics' appears nowhere in the proceedings. Real heads will know it's hard to go too deep into the documented historical facts around why this was without sounding like one is on some David Icke shite.

        • Mouhamed_McYggdrasil [they/them,any]
          hexagon
          ·
          4 years ago

          Hebbian Learning formalized a model of it back in the late 40s, and then the Perceptron was developed in the 50s, but if you're talking about neural networks in general, not just mathematical abstractions of them that can be used as models, the idea's been around since the 19th century.
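
          (For flavor, that 50s-era abstraction really does fit in a few lines. Here's a toy perceptron with the classic error-driven learning rule; the example data is made up, obviously.)

          ```python
          # Toy perceptron: a weighted sum, a threshold, and the
          # classic error-driven weight update.

          def train_perceptron(samples, epochs=20, lr=0.1):
              # samples: list of (inputs, label) pairs, label in {0, 1}
              n = len(samples[0][0])
              w = [0.0] * n
              b = 0.0
              for _ in range(epochs):
                  for x, y in samples:
                      # Step activation: "fire" iff the weighted sum
                      # clears the threshold.
                      pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                      err = y - pred
                      # Nudge the weights toward inputs that should have fired.
                      w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                      b += lr * err
              return w, b

          # Learns AND in a handful of epochs.
          data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
          print(train_perceptron(data))
          ```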

                • Mouhamed_McYggdrasil [they/them,any]
                  hexagon
                  ·
                  edit-2
                  4 years ago

                  Yeah, like deep convolutional neural networks can do some incredible stuff, particularly with images, especially when combined with generative adversarial nets to create brand-new realistic imagery (deepfakes and portraits of people who don't exist showcase it pretty nicely). Buuuut, there's a scope to what they can do, and the impressive bleeding-edge results often require a ton of training examples (we're lucky people love taking selfies and pictures of their pets). And that scope honestly doesn't go much further than "is this a picture of a(n) XYZ?", with the generative networks pretty much just fiddling with an image until the classifier answers yes, albeit in a very novel and creative manner.

                  From the outside looking in, it might seem like not that big of a leap to develop a deep learning NN that estimates the weight of whatever it considers the subject of the picture, but I really doubt that'd be possible anytime in the near future, other than by creating a database of literally every object imaginable and how much each one weighs, then having the NN go down the list asking "is this XYZ?" and returning the associated weight on the first "yes" (see the first sketch below). Not exactly what you'd consider 'intelligent', but the way the mainstream media and popsci glowingly report on deep learning, you'd think something like that would be a cinch to make, or might even exist already.

                  Same goes for the text processing stuff: everyone freaked out when GPT-2 came along and could generate realistic paragraphs of text that could pass for legitimate news articles. It is impressive (although we've been able to do something very similar, to a lesser degree, using Markov chains for over a century; see the second sketch at the very end of this post), but aside from that and some other specific tasks like summarization and translation, it's bound by its scope. Have it generate enough text and it will eventually become incoherent and even start contradicting things it said earlier. I haven't looked too deeply into it (shame on me, this sort of natural language processing is my #1 jam and it's been the hot topic for a minute now), but I'd be shocked and incredibly impressed if you could, say, feed it the "United States involvement in regime change" Wikipedia article, query "What countries has the US overthrown the democratically elected leader of?", and get a meaningful response.

                  Sorry for writing an essay-of-a-post, I'm bored and kinda lonely and miss working on this sort of stuff
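
                  First sketch: the lookup-table "weight estimator" spelled out. Every name in it is invented for illustration, and classify() is just a stand-in for any off-the-shelf "is this a picture of X?" deep net.

                  ```python
                  # Hypothetical "weight estimator": a classifier bolted onto
                  # a lookup table. All names here are made up.

                  AVERAGE_WEIGHT_KG = {
                      "cat": 4.5,
                      "bicycle": 10.0,
                      "grand piano": 400.0,
                      # ...and literally every other object imaginable
                  }

                  def classify(image, label):
                      # Stand-in for a binary "is this a(n) <label>?" deep net.
                      # Here the "image" is just its ground-truth label so the
                      # sketch actually runs.
                      return image == label

                  def estimate_weight(image):
                      # No concept of mass, volume, or material -- just marching
                      # down the list asking "is this XYZ?" and returning a
                      # canned answer on the first "yes".
                      for label, weight in AVERAGE_WEIGHT_KG.items():
                          if classify(image, label):
                              return weight
                      return None  # not in the database -> no answer at all

                  print(estimate_weight("bicycle"))  # 10.0
                  print(estimate_weight("narwhal"))  # None
                  ```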

                  but yeah, there's definitely a type out there that, every time AI is able to do something new and impressive, acts like we're one step away from having general intelligence that can think and act just like a human being, when in reality it's very limited to the scope of whatever task it was developed to solve
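
                  Second sketch: the century-old Markov chain trick, word-level, on a made-up toy corpus. It learns which word tends to follow each pair of words, then walks that table to babble out new text, and that's the whole trick.

                  ```python
                  # Tiny word-level Markov chain text generator.
                  import random
                  from collections import defaultdict

                  def build_chain(text, order=2):
                      words = text.split()
                      chain = defaultdict(list)
                      for i in range(len(words) - order):
                          key = tuple(words[i:i + order])
                          chain[key].append(words[i + order])
                      return chain

                  def generate(chain, order=2, length=30):
                      # Start from a random state and follow the table.
                      out = list(random.choice(list(chain.keys())))
                      for _ in range(length):
                          followers = chain.get(tuple(out[-order:]))
                          if not followers:  # dead end: this pair only ends the corpus
                              break
                          out.append(random.choice(followers))
                      return " ".join(out)

                  corpus = (
                      "the networks can do incredible stuff with images and "
                      "the networks can do impressive stuff with text but "
                      "the networks cannot reason outside their scope"
                  )
                  print(generate(build_chain(corpus)))
                  ```

                  The output sounds locally plausible and globally incoherent, which is GPT-2's failure mode too, just much worse.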

  • Zoift [he/him]
    ·
    4 years ago

    It may be a spark of consciousness, but I can't imagine it's one we could sympathize with. It's coming from an entirely different architecture, shaped by an entirely different set of evolutionary pressures. Its conception of "self", "goals", or "pain/pleasure" may literally be unthinkable to us.

    We can barely communicate with chimps or bonobos on any meaningful level, and they have 2.5 billion years of common devtime with us. What hope do we have of sharing meaning with our new gestalt friends?

  • Pezevenk [he/him]
    ·
    edit-2
    4 years ago

    Oh my god, I remember who you are: you're the weird quantum code chaos theory person. Pseudointellectual technobabble is a fuck.

        • science_pope [any]
          ·
          4 years ago

          Sorry, you're right. It's not possible to prove. But that's true of people, too, so you're still just left with intersubjectively verifiable qualia, eh?

            • science_pope [any]
              ·
              4 years ago

              I think we're pretty much on the same page here.

              I’d say the further away we get from humans (to other animals, and then to non-animals like computer programs), the trickier the question gets.

              I agree. But you can ask them.

  • KurdKobein [any]
    ·
    4 years ago

    There's a popular intuition that an organism's ability to suffer is proportional to its complexity. If C. elegans, with its 302 neurons, experiences as much suffering when you accidentally step on it while jogging in a park as a human being crushed to death does, that would fuck up a lot of people's conceptions of morality.

      • ABigguhPizzahPieh [none/use name,any]
        ·
        edit-2
        4 years ago

        I would say likely not. Let's look at a plant - what would be the purpose of pain to an organism that can't move away from the source of it? If I touch a knife's edge, my hand recoils from the pain. If you cut a plant, it doesn't recoil; "pain" would serve it no purpose because it can't move. C. elegans also recoils when it touches a knife's edge, but does it experience pain the way we do if it doesn't have a mind, or is it simply responding to a stimulus? I think it's simply responding to a stimulus.

  • KurdKobein [any]
    ·
    4 years ago

    I mean, you can lose function - your hand going numb, getting too drunk to walk, whatever - without experiencing any suffering, so this whole "loss of function = suffering" thing doesn't track.

    One dude went hiking alone for a couple of days, and when he returned he found out that he'd lost his ability to understand language. Turns out he'd suffered a small stroke during his trip and didn't even notice.