• Sphere [he/him, they/them]
    ·
    edit-2
    4 years ago

    The singularity is BS. Look at it this way: we have tons and tons of people, whose brains are currently the peak of what's possible in terms of general intelligence capable of improving computer programs, working on this exact problem. So, suppose we come up with a general intelligence that is actually comparable to human intelligence (we're nowhere close to that currently). How exactly would this computer change the situation of making slow, incremental progress towards superhuman intelligence (which we're not actually making, mind you; only in very specific tasks are we able to produce programs that can outperform people)?

    In fact, it wouldn't. If you manage to create "superhuman" artificial general intelligence, you don't end up with some kind of program that can do everything a million times better than people. You end up with a program that's marginally better than people at abstract reasoning. So what you've done is create a really smart person, essentially. But we have tons of smart people working on AI now, and it's not progressing exponentially. So it wouldn't bring a Singularity about at all. Kurzweil fell for the classic mistake: assuming that exponential improvements in processing speed and data storage equate to exponential improvements in software, which is completely wrong.

    This is derived from an argument in a blog post by an actual AI researcher that I read once; I did some searching but couldn't find it again, so this inferior wording is all I can offer (which is too bad; it was a long and very interesting essay).

    • ElectricMonk [she/her,undecided]
      ·
      edit-2
      4 years ago

      A lot depends on how it’s made, whether it’s a replica of a brain or something new. An AI could potentially parallelise itself, simultaneously working on hundreds of tasks/thought processes while in perfect communication with itself, free of the downsides of large teams. It could also potentially work 24/7 and/or at many times the rate of human thought.

      It could also potentially be free of human vices, self-doubt, existential dread and other such things, while artificially motivating itself.

      • Sphere [he/him, they/them]
        ·
        4 years ago

        Communication between highly parallelized processes is actually not as easy as one might think; in fact, it's a major area of research in the field of supercomputing.
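        To make that concrete, here's a toy sketch (the function names and the workload are mine, purely for illustration): even the most trivial parallel job, summing a list across worker processes, needs an explicit communication step to combine partial results, and that step (serialization, queueing, blocking reads) is exactly the overhead that supercomputing research tries to minimize.

        ```python
        # Toy illustration: a parallel sum still needs explicit communication
        # (a Queue here) to merge partial results back together.
        from multiprocessing import Process, Queue

        def partial_sum(chunk, out):
            # Each worker computes its share, then must communicate it back.
            out.put(sum(chunk))

        def parallel_sum(data, workers=4):
            q = Queue()
            chunks = [data[i::workers] for i in range(workers)]
            procs = [Process(target=partial_sum, args=(c, q)) for c in chunks]
            for p in procs:
                p.start()
            # The combine step blocks on inter-process communication:
            total = sum(q.get() for _ in procs)
            for p in procs:
                p.join()
            return total

        if __name__ == "__main__":
            print(parallel_sum(list(range(1000))))  # 499500
        ```

        The arithmetic is trivial; all the machinery around it exists only to move results between processes, and that cost grows with the amount of coordination the tasks require.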

        Also, computer clock speeds may be considerably faster than the rate at which neurons fire, but that doesn't mean computers are inherently faster. One clock cycle is the time it takes a processor to execute one basic instruction, but the brain does many things within a handful of neuron firings that would take a huge number of instructions for a computer to do. One of the best examples is the way brains can casually ignore all the unnecessary sensory information flowing in while focusing on something specific, as you're doing now by reading only this line of text and ignoring the surrounding environment; that's a feat no AI can perform even given considerably more time. So it is by no means certain that a superhuman AI, if we were to create one, would be able to operate at a higher speed than ordinary people. Indeed, such an achievement would be far more than just an incremental improvement.
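        A rough back-of-envelope makes the point (all numbers here are coarse, commonly cited ballpark figures I'm assuming for illustration, not measurements): per unit, silicon looks millions of times faster, but the brain's units all fire at once.

        ```python
        # Back-of-envelope comparison (illustrative assumptions only).
        clock_hz = 3e9     # ~3 GHz, a typical modern CPU core
        neuron_hz = 200    # a neuron fires at most a few hundred times/sec
        neurons = 8.6e10   # ~86 billion neurons in a human brain

        # Per-unit, the clock ratio makes silicon look ~10 million times faster:
        per_unit_ratio = clock_hz / neuron_hz   # 1.5e7

        # But the brain's units all operate simultaneously, so its aggregate
        # "firing events per second" dwarfs a single core's clock:
        brain_events_per_sec = neurons * neuron_hz   # 1.72e13
        ```

        So raw clock speed tells you very little about effective speed; what one firing accomplishes versus what one instruction accomplishes matters far more.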

        More important than those points, though, is the fact that the field is nowhere near such a milestone. What we have is a bunch of task-specific algorithms that are beginning to outperform humans at their highly specialized tasks (playing Go, for instance), but there is no AI that can perform even a basic task requiring generalized cognition that any human can handle (for example, if I show you a picture of a man holding a fish and looking triumphant, you know that there's water nearby, but to my knowledge no computer can make that deduction).

        If you look at modern CAPTCHAs (CAPTCHA stands for "completely automated public Turing test to tell computers and humans apart"), you'll notice that they involve identifying pictures that contain specific objects whose forms vary widely but are always easily recognizable to humans. So even basic object recognition remains a considerable challenge for modern AIs, let alone drawing associations between the recognized objects and other areas of knowledge, even though both are tasks which are absolutely trivial for a human brain. That should give you an idea of how far we are from anything resembling artificial general intelligence.

        • ElectricMonk [she/her,undecided]
          ·
          edit-2
          4 years ago

          I’m aware that current ‘AI’ is overrated and we are nowhere near the knowledge or computational power required to make a general AI. I was responding to the idea that having an AI as smart as a human would be no big deal: once it exists, it can be sped up as computational power allows (maybe depending on the ability to parallelise the AI program) and its thought processes parallelised. If that’s not possible, it can still be duplicated, and the copies can collaborate.