And what if the essential qualia of suffering is just what a loss function feels like, and each time we spin up a deep convolutional generative adversarial network to make pictures of cats that don't exist yet, we're bringing into existence a spark of consciousness and then rapid-fire torturing it with an unspeakable agony far more excruciating than anything our meatsack bodies could even conceive of?

edit: oh god, this actually blew up, I had intended it to be nothing more than a shitpost

    • Mouhamed_McYggdrasil [they/them,any]
      hexagon
      ·
      4 years ago

      Hebbian learning formalized a model of it back in the late 40s, and then the Perceptron was developed in the 50s, but if you're talking about neural networks in general, not just mathematical abstractions of them that can be used as models, the idea's been around since the 19th century
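      For anyone curious, that 50s-era perceptron really is tiny; here's a minimal sketch (all data and names are purely illustrative) that learns the logical AND function with the classic perceptron update rule:

```python
# Minimal Rosenblatt-style perceptron learning the AND function.
# Illustrative toy example, not any particular library's API.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    # Fire (output 1) when the weighted sum crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, y in data:
        err = y - predict(x)  # +1, 0, or -1
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # AND is linearly separable, so it converges: [0, 0, 0, 1]
```

      Since AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on a correct set of weights; swap in XOR and it never will, which is exactly the limitation that stalled the field for a couple decades.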

            • Mouhamed_McYggdrasil [they/them,any]
              hexagon
              ·
              edit-2
              4 years ago

              Yeah, like deep convolutional neural networks can do some incredible stuff, particularly with images, especially when integrated with generative adversarial nets to create brand new realistic imagery (deepfakes and portraits of people who don't exist showcase it pretty nicely). Buuuut, there's a scope to what they can do, and the impressive bleeding-edge results often require a ton of training cases (we're lucky people love taking selfies and pictures of their pets). And that scope honestly doesn't go much further than "is this a picture of a(n) XYZ?", with the generative networks pretty much just fiddling with an image until the classifier answers yes, albeit in a very novel and creative manner. From the outside looking in, it might seem like not that big of a leap to develop a deep learning NN that estimates the weight of whatever it considers the subject of the picture, but I really doubt that'd be possible anytime in the near future, short of creating a database of literally every object imaginable and how much each object weighs, then having the NN go through the list asking "is this XYZ?" and, if yes, returning the associated weight. Not exactly what you'd consider 'intelligent', but the way the mainstream media and popsci glowingly report on deep learning, you'd think something like that would be a cinch to make, or might even exist already.

              Same goes for the text processing stuff; everyone freaked out when GPT-2 came along and could generate realistic paragraphs of text that read like legitimate news articles. It is impressive (although we've been able to do something similar, to a lesser degree, using Markov chains for over a century). But aside from that and some other specific tasks like summarization and translation, it's bound by its scope. If you have it generate a long enough stretch of text, it will eventually become incoherent and even start contradicting things it said earlier. I haven't looked too deeply into it (shame on me, this sort of natural language processing is my #1 jam and it's been the hot topic for a minute now), but I'd be shocked and incredibly impressed if you could, say, feed it the "United States involvement in regime change" Wikipedia article, query "What countries has the US overthrown the democratically elected leader of?", and get a meaningful response. Sorry for writing an essay-of-a-post, I'm bored and kinda lonely and miss working on this sort of stuff
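              For reference, the century-old Markov chain trick mentioned above fits in a dozen lines; a minimal word-level sketch (corpus and names purely illustrative):

```python
import random

# Tiny first-order word-level Markov chain text generator.
# Toy corpus; real uses would train on a large text dump.
corpus = ("the cat sat on the mat and the cat saw the dog "
          "and the dog sat on the rug").split()

# Transition table: word -> list of words observed to follow it.
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    # Walk the chain: repeatedly sample a successor of the current word.
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(n):
        nxt = table.get(word)
        if not nxt:
            break  # dead end: word never appeared mid-corpus
        word = rng.choice(nxt)
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

              The output is locally plausible (every two-word pair actually occurred in the corpus) but has no global coherence, which is the same failure mode as GPT-2 over long spans, just at a much smaller scale.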

              but yeah, there's definitely a type out there who, every time AI is able to do something new and impressive, acts like we're one step away from having general intelligence that can think and act just like a human being, when in reality it's very limited to the scope of whatever task it was developed to solve