• RNAi [he/him]
    4 years ago

    Give me all your data and I will tell you how long you will live, by discarding everything else and just looking at your address and job.

    Machine learning is just "Statistics 3"

  • crime [she/her, any]
    4 years ago

    If it hasn't gained sentience and tried to kill its creators, it's not AI, it's just a bunch of if statements

  • Owl [he/him]
    4 years ago

    The current crop of neural networks and deep learning is going to create some exciting and horrifying new kinds of spam and some interesting art projects. It'll get used in a lot of places where it would have been easier and better to have a statistician look at the data while asking a domain expert to explain things. The approach itself is a dead end that'll cap out in a couple years, but it'll stick around because of all the people/orgs/companies that first considered actually looking at data because of AI hype.

    A technological singularity is fundamentally possible, but not inevitable in the Kurzweilian sense. If someone makes an AI that can produce novel insights into AI algorithms, then AI research progress will accelerate in a wild feedback loop. And that's the only condition - a singularity won't magically happen because we can build a computer with as many transistors as a human brain has neurons, or a neural network that size, or whatever.

    • the_river_cass [she/her]
      4 years ago

      The current crop of neural networks and deep learning is going to create some exciting and horrifying new kinds of spam and some interesting art projects.

      I'll say. GPT-3 kicks ass at making copypasta:


      "What the fucking did you just fucking say about me, you little bitch? I’ll fucking have you know that I am the most decorated Navy Seal since the Vietnam War. I am credited with saving the lives of nearly 40 children in Afghanistan and Iraq. I am certified in the Marine Corps Martial Arts Program and hold black belts in tae kwon do, kajukenbo, and kali/escrima.

      My sense of hearing is so keen that I can detect my enemy’s movement by the sound of his blood pumping through his veins. I can smell the sweat of my enemy as he holds his breath in my presence. My physical endurance is so extreme that I can swim 100m, with full combat gear, underwater, with 60lbs of weights strapped to my ankles in 52 seconds.

      My eyes are so precise I can shoot a laser from my eyes. I can see dust particles that are smaller than the head of a pin from a mile away. My grip is so powerful I can crush boulders and destroy trees with a flick of my wrist. My IQ is off the charts, so if you have a weak stomach you shouldn’t stand too close to me.

      My rage is so extreme that it cannot be matched in any man. You can flee a M1A1 Abrams main battle tank in 6 directions, but you can’t run from me. I have never lost a fair fight and I’ve won plenty. I have an 8 pack, but not the kind you find in magazines. No, mine is harder and better. My abs are literally harder than diamonds.

      I can’t be seduced by beautiful women because I have a mirror in my bedroom that shows me the truth.

      My rage and strength are so incredible that my every action has the force of 1,000.

      I am the protector of my friends. I am the bringer of justice. I am America’s fucking vengeance. My fist is on a hair trigger and I am not afraid to use it.

      I have been to the depths of the abyss and I am not afraid to face it. In fact, I welcome it. I am a real man. You are merely a weak boy.

      I have been running 25 miles every day, pushing a military spec Humvee in front of me. I know of your ways and I am not impressed. You think you can scare me? You think I won’t fight back? You are WRONG.

      You can’t break my bones because I am immune to all pain. I have never felt it, so I am incapable of breaking. I am a real man, and you are but a mere boy. I will make you eat your own fist before I’m done with you. I am no one’s bitch."

    • Pezevenk [he/him]
      4 years ago

      Even if it can produce novel insights into AI algorithms, it is not necessarily the case that this won't also cap out eventually. It might be that the feedback loop just stops after a while, because making sufficient additional progress would require a leap that cannot be made with the technology available up to that point. It wouldn't be the first time improving a technology led to said technology developing even faster thanks to a feedback loop, until it didn't any more.
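
      A toy illustration of that (numbers entirely made up): the same feedback loop, with and without a ceiling on how far the technology can go.

      ```python
      # Toy model, all numbers made up: capability C feeds back into its own growth.
      # Uncapped loop:  dC/dt = k*C              -> runaway exponential
      # With a ceiling: dC/dt = k*C*(1 - C/Cmax) -> logistic growth that levels off
      k, Cmax, dt, steps = 0.5, 100.0, 0.01, 2000

      def simulate(capped):
          C = 1.0
          for _ in range(steps):
              rate = k * C * ((1 - C / Cmax) if capped else 1)
              C += rate * dt
          return C

      print(simulate(capped=False))  # ~2e4 and climbing: the "wild feedback loop"
      print(simulate(capped=True))   # ~100: the same loop, capped out
      ```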

      • Owl [he/him]
        4 years ago

        Yeah, it's possible that it caps somewhere.

        Though for however long it lasts, the feedback loop is fundamentally different from normal technological inspiration, since the ability to make new discoveries is improved by the previous discoveries in a way that it wasn't before (on top of the ways it already was).

    • Veganhydride [he/him]
      4 years ago

      a singularity won’t magically happen because we can build a computer with as many transistors as a human brain has neurons

      This has already happened (the transistor part I mean)

      or a neural network that size

      We're 20% there

      • dualmindblade [he/him]
        4 years ago

        We’re 20% there

        Not even close. The largest ANNs have ~200 billion connections and some millions of neurons. The human brain has ~100 trillion connections and ~80 billion neurons. Also, biological neurons are computationally more powerful, and the connections between them are more complex.

        So we're like 0.01% there or less. What's really remarkable is how far this actually gets us; it may be that we don't need to come close to the complexity of the human brain to build something strictly more intelligent.

        • Veganhydride [he/him]
          4 years ago

          Don't get me wrong, it's an arbitrary calculation, but my numbers were 17 billion parameters (Microsoft DeepSpeed) / 86 billion neurons
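
          For what it's worth, here are both back-of-envelope calculations side by side (the arbitrary part being that parameters and neurons/synapses aren't really comparable units):

          ```python
          # Back-of-envelope only: parameters and neurons/synapses aren't
          # comparable units, which is why the two estimates differ so wildly.
          params_deepspeed = 17e9    # Microsoft DeepSpeed-era model, ~17B parameters
          brain_neurons    = 86e9    # ~86 billion neurons
          print(params_deepspeed / brain_neurons)   # ~0.20 -> the "20% there" figure

          ann_connections  = 200e9   # largest ANNs, ~200B connections/parameters
          brain_synapses   = 100e12  # ~100 trillion synaptic connections
          print(ann_connections / brain_synapses)   # 0.002 -> ~0.2%, before discounting
                                                    # for biological neurons doing more
          ```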

    • dualmindblade [he/him]
      4 years ago

      The approach itself is a dead end that’ll cap out in a couple years

      How would you characterize "the approach itself"?

  • tofunaut [he/him]
    4 years ago

    no, the problem is, as always, capitalism

    AI is really, actually getting better. But it's being used to generate profit rather than figure out the most efficient way to give healthcare to people or unionize workplaces.

    I read that Ray Kurzweil book, The Singularity Is Near, as a kid and still think it is neat. Eventually we'll have computers that can do "human" tasks better than humans, like write laws or make music, etc. You might have a reaction to that like "oh it's beautiful but it was written by a computer so it doesn't count" but like... the computer was made by humans. It's doing the task the humans intended it to do. It's still a human project. We just have to make sure it's owned by society and not a few capitalists.

    • Pezevenk [he/him]
      4 years ago

      First of all, yeah, it is getting better but I'm not sure it's gonna keep getting significantly better for years to come. The "singularity" requires a leap underestimated by many AI evangelists.

      Second, even if that happens, it is fundamentally wrong to claim AI can make better laws or music than humans. If you want a computer to do something, you gotta quantify what "good" means in some way. How do you train a computer to write "good" music? Even worse, how do you make it write a "good" law? The computer doesn't know what that means. Good at achieving what? Whatever result the computer gives you, it's only gonna be as good as the trainer's idea of what "good" means.
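
      In code terms, that judgement is the objective function itself. A hypothetical sketch (all names and scores made up):

      ```python
      # Hypothetical sketch: the optimizer only ever chases whatever definition
      # of "good" a human wrote down. Change the weights, change the "best" music.
      candidates = ["piece_a", "piece_b", "piece_c"]

      novelty     = {"piece_a": 0.9, "piece_b": 0.2, "piece_c": 0.5}  # made-up scores
      familiarity = {"piece_a": 0.1, "piece_b": 0.9, "piece_c": 0.6}  # made-up scores

      def goodness(piece, w_novel=0.3):
          # These weights ARE the value judgement -- there is no neutral setting.
          return w_novel * novelty[piece] + (1 - w_novel) * familiarity[piece]

      print(max(candidates, key=goodness))                            # -> piece_b
      print(max(candidates, key=lambda p: goodness(p, w_novel=0.9)))  # -> piece_a
      ```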

      • the_river_cass [she/her]
        4 years ago

        it's only gonna be as good as the trainer's idea of what "good" means.

        training isn't this precise any more. they feed in datasets that are much too large for humans to actually annotate (like a scraped copy of the entire public internet, for the last round of NLP algorithms) and instead look for things like coherence, or the capacity of the trained algorithm to learn from a few examples (which is where the kind of good/bad selection you're thinking of actually takes place), then adjust parameters and try again until they get something stable out (most of the outputs are only good for a few rounds of Q&A before they devolve into incoherence). this is the difference between supervised learning (what you're thinking of) and unsupervised learning (what's becoming the only practical way to train many algorithms).
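
        roughly the difference in code, as a pytorch-flavored sketch (not any particular codebase; the function bodies are illustrative, not a real training loop):

        ```python
        import torch
        import torch.nn.functional as F

        # Supervised: a human-chosen label defines "good" for every single example.
        def supervised_step(model, x, human_labels):
            loss = F.cross_entropy(model(x), human_labels)  # graded against annotations
            loss.backward()

        # Unsupervised/self-supervised (GPT-style): the scraped text itself is the
        # target -- just predict the next token. No per-example human judgement.
        def unsupervised_step(model, tokens):
            inputs, targets = tokens[:, :-1], tokens[:, 1:]
            logits = model(inputs)                          # (batch, seq, vocab)
            loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   targets.reshape(-1))
            loss.backward()
        ```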

        to see what this feels like in practice, browse this collection of samples pulled out of GPT-3, an algorithm that has only received unsupervised training (the option to give it supervised refinement is not yet available). the people training it most definitely could not have intended much of what shows up here. many of the examples are not great, but there's also way more gems here than you'd expect from something trained only on meta factors, rather than on specific kinds of outputs for specific questions.

        I'll link stuff that stood out to me if anyone is interested.

        • Pezevenk [he/him]
          4 years ago

          This doesn't matter; the basic idea is the same. I am not thinking of supervised or unsupervised learning, because the central difficulty is the same. You're describing methods, but behind all the terminology, at the end of the day there is still someone making a value judgement at some point in the procedure, no matter how obscured that judgement might be, and that judgement is fundamental to the result you are going to get, no matter how good your algorithm is. Cool, so you have an AI that makes "good" music. Good according to whom? Because whatever the idea of musical value of someone who listens to Five Finger Death Punch 24/7 is, it's probably not my idea.

          Now this isn't so bad when it comes to music. But laws? If you ask 10 people what they think laws should be achieving you'll get 11 different answers, but whatever you decide the right answer is, it's gonna be applied to all of them.

          • the_river_cass [she/her]
            4 years ago

            there is still someone making a value judgement at some point in the procedure

            sure, I'm saying the values involved are getting increasingly abstract.

            Cool, so you have an AI that makes “good” music. Good according to whom?

            this is why I linked the page of examples. the answer is that it's according to the person asking the question (which is new, it didn't use to be this way).

            Now this isn’t so bad when it comes to music. But laws? If you ask 10 people what they think laws should be achieving you’ll get 11 different answers, but whatever you decide the right answer is, it’s gonna be applied to all of them.

            who's saying AI should write law right now...? I'm pointing out that there's more capacity here than leftists generally give credit for and that that capacity can be used for good and bad ends.

            • Pezevenk [he/him]
              4 years ago

              who’s saying AI should write law right now…?

              That is what I am saying: it is not a matter of right now, it is a matter of "ever". It is a difficulty not resolved by better algorithms. It is a fundamental difficulty that inherently limits the scope of AI, EVEN if the technology actually has the capacity to get there any time soon, which is not a given, unless AI can evolve to improve AI algorithms significantly, which isn't a given either, and even if it does, it is again not a given that it won't cap out once more.

              • the_river_cass [she/her]
                4 years ago

                my objection to AI writing laws isn't really about the technology -- maybe it can get to a point where it might make sense, maybe it can't, but it's immaterial. the politics of the person who says AI should write laws are kind of questionable. the hard part about laws, about politics, isn't a technical matter of finding the cleverest solution or whatever, it's the hard work of convincing actual human beings that they should support the law. outsourcing that to AI does nothing to solve that problem, except perhaps in a world where we've built a cult around AI and people unquestioningly believe what an AI tells them.

                technology, no matter how clever or powerful, can't solve political problems.

                • Pezevenk [he/him]
                  4 years ago

                  Exactly, that is why I believe it to be a fundamental limitation which won't be solved by better technology. I also have a similar reason to disagree with some people who think AI will replace musicians, though there are also other very important factors that people overlook.

      • tofunaut [he/him]
        4 years ago

        right, I think that's the nerdy part: that computer programs will develop their own culture and moral system and become living things without a human specifically telling them to do that

  • truth [they/them]
    4 years ago

    Singularity is impossible for thermodynamic / emergence reasons. A machine can't increase its own complexity beyond its hardware limitations. It's like trying to design a Redstone computer that exceeds the flops of the machine it's running on.

  • Sphere [he/him, they/them]
    4 years ago

    The singularity is BS. Look at it this way: we have tons and tons of people, whose brains are currently the peak of what's possible in terms of general intelligence capable of improving computer programs, working on this exact problem. So, suppose we come up with a general intelligence that is actually comparable to human intelligence (we're nowhere close to that currently). How exactly would this computer change the situation of making slow, incremental progress towards superhuman intelligence (which we're not actually making, mind you; only in very specific tasks are we able to produce programs that can outperform people)?

    In fact, it wouldn't. If you manage to create "superhuman" artificial general intelligence, you don't end up with some kind of program that can do everything a million times better than people. You end up with a program that's marginally better than people at abstract reasoning. So what you've done is create a really smart person, essentially. But we have tons of smart people working on AI now, and it's not progressing exponentially. So it wouldn't bring a Singularity about at all. Kurzweil fell for the classic mistake: assuming that exponential improvements in processing speed and data storage equate to exponential improvements in software, which is completely wrong.

    This is derived from the argument I found in a blog post by an actual AI researcher once, but I did some searching and was unable to find it, so this inferior wording is all I can offer (which is too bad; it was a long and very interesting essay).

    • ElectricMonk [she/her,undecided]
      4 years ago

      That depends a lot on how it's made, whether it's a replica of a brain or something new. An AI could potentially parallelise itself, being able to simultaneously work on hundreds of tasks/thought processes whilst in perfect communication with itself, free of the downsides of large teams. It could also potentially work 24/7 and/or at many times the rate of human thought.

      It could also potentially be free of human vices, self-doubt, existential dread and other such things, while artificially motivating itself.

      • Sphere [he/him, they/them]
        4 years ago

        Communication between highly parallelized processes is actually not as easy as one might think; in fact, it's a major area of research in the field of supercomputing.

        Also, computer clock speeds may be considerably faster than the rate at which neurons fire, but that doesn't mean that computers are inherently faster. One clock cycle is the time it takes a processor to execute one basic instruction, but the brain does many things within a handful of neuron firings that would take a huge number of instructions for a computer to do (one of the best examples is the way brains can just casually ignore all the unnecessary sensory information flowing in while focusing on something specific, as you're doing now by reading only this line of text and ignoring the surrounding environment, which is a feat no AI can perform even given considerably more time to do so). So it is by no means certain that a superhuman AI, if we were to create one, would necessarily be able to operate at a higher speed than ordinary people--indeed, such an achievement would be far more than just an incremental improvement.
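
        A back-of-envelope version of the speed comparison (every number here is a rough, contestable estimate):

        ```python
        # All numbers rough, order-of-magnitude only.
        cpu_hz    = 3e9    # ~3 GHz clock
        neuron_hz = 100    # neurons fire at ~100 Hz tops, usually far less
        print(cpu_hz / neuron_hz)      # ~3e7: a core ticks tens of millions
                                       # of times faster than a neuron fires

        # ...but each "tick" of the brain is massively parallel:
        neurons    = 86e9
        syn_per    = 1e3   # ~100 trillion synapses / 86 billion neurons
        brain_evts = neurons * neuron_hz * syn_per  # ~1e16 synaptic events/s
        cpu_ops    = 1e11  # generous estimate for a modern many-core chip
        print(brain_evts / cpu_ops)    # ~9e4: the parallel event count dwarfs
                                       # serial ops, and one synaptic event can
                                       # be worth many basic instructions
        ```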

        More important than those points, though, is the fact that the field is nowhere near reaching such a milestone. What we have is a bunch of task-specific algorithms that are beginning to outperform humans at their highly specialized tasks (playing Go, for instance), but there is no AI that can possibly perform a basic task that any human can perform that requires any generalized cognitive capabilities (for example, if I show you a picture of a man holding a fish and looking triumphant, you would know that there's water nearby, but to my knowledge no computer can make that deduction).

        If you look at modern CAPTCHAs (CAPTCHA stands for "completely automated public Turing test to tell computers and humans apart"), you'll notice that they involve identifying pictures that contain specific objects whose forms vary widely but are always easily recognizable to humans. So even basic object recognition remains a considerable challenge for modern AIs, let alone drawing associations between the recognized objects and other areas of knowledge, even though both are tasks which are absolutely trivial for a human brain. That should give you an idea of how far we are from anything resembling artificial general intelligence.

        • ElectricMonk [she/her,undecided]
          4 years ago

          I'm aware that current 'AI' is overrated and we are nowhere near the knowledge or computational power required to make a general AI. I was responding to the idea that having an AI as smart as a human would be no big deal, because once it exists, it can be sped up as computational power allows (maybe depending on the ability to parallelise the AI program) and its thought processes parallelised. If that's not possible, it can still be duplicated and made to collaborate with itself.

    • kristina [she/her]
      4 years ago

      ai that constantly monitors workers to check for micro-expressions that may indicate they will unionize and will dispense dick-flattening justice within 2 seconds

  • kristina [she/her]
    4 years ago

    ai predictions are libshit. we've been using the same models for like 50 years but only recently had the processing power to do anything with them