Frankly, not sure where to begin with this one. Some here have already pointed out how it can easily be trained to produce racist/biased data, which is a big red flag to begin with. But am I the only one thinking about how this deep-learning AI algorithm is going to render millions obsolete at the behest of capital? As with almost everything developed under capitalism, it's a marvelous discovery that will in all likelihood be used for nothing but exploitation. And do I even have to mention how our leaders are geriatric and couldn’t regulate this shit to save their lives?

Unless this is somehow made open-source (and soon), we’re fucked.

  • Hohsia [he/him]
    hexagon
    ·
    2 years ago

    This thing can basically do all entry-level programming and it’s still learning

    • Sphere [he/him, they/them]
      ·
      2 years ago

      As a well-paid software engineer, I'm not the least bit worried. Not only does it kinda suck at programming, but more than that, writing actual code is a mere fraction of what I get paid to do. A huge portion of this job is figuring out (or even better, understanding without needing to investigate) what's wrong with the program when it gives bad output. Another huge portion is explaining what the software does, to an appropriate level of detail, to someone who does not understand it (and in many cases doesn't know how to program at all).

    • StellarTabi [none/use name]
      ·
      2 years ago

      From what I understand this thing is actually a lot buggier and more error-prone than copying answers from stackoverflow. People who've made things with it had to spend a lot of time validating and correcting its output. For anything non-trivial, that time would be better spent not using it at all.

      It's useful in the sense that an AI that produces a picture of a girl with black eyes and a surprise second row of bottom teeth is useful.

      • mittens [he/him]
        ·
        2 years ago

        It's worse, because the second row of bottom teeth is obviously wrong, whereas this produces wrong output that looks correct, so everything it generates needs to be verified independently.

    • kissinger
      ·
      edit-2
      1 year ago

      deleted by creator

      • Budwig_v_1337hoven [he/him]
        ·
        2 years ago

        It's not a rigid, preprogrammed decision tree - it's entirely probabilistic, inferring from training data. Still, 'learning' is too generous a term; it's more like... refining its predictions, getting better at what it does. It's getting better at rolling dice, but that's fundamentally all it can ever do.
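
        To make the dice-rolling concrete, here's a minimal sketch in Python. Everything in it (the token list, the probabilities) is invented for illustration - the point is just that generation means sampling from a learned distribution, nothing more:

        ```python
        import random

        # Hypothetical probabilities a trained model might assign to the
        # next token after some prompt. All numbers are made up.
        next_token_probs = {"mat": 0.55, "floor": 0.20, "couch": 0.15, "moon": 0.10}

        def roll_the_dice(probs):
            """Sample one token according to its learned probability (a weighted die)."""
            tokens = list(probs)
            weights = list(probs.values())
            return random.choices(tokens, weights=weights, k=1)[0]

        print(roll_the_dice(next_token_probs))  # usually "mat", occasionally "moon"
        ```

        Training only nudges those weights around; the sampling step itself never changes.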

        • kissinger
          ·
          edit-2
          1 year ago

          deleted by creator

        • drhead [he/him]
          ·
          2 years ago

          'Learning' is a term of abstraction. "Making a probabilistic model of which tokens should follow one another for a given input" is annoying to say every time. It's the same as when people talk about evolution as if there were design: people who understand evolution know that "this finch's beak is designed for eating seeds" is just shorthand, not a claim that anyone designed it. It's the same with machine learning.

      • Owl [he/him]
        ·
        2 years ago

        None of the recent wave of AI models continues to learn after being trained. They have a training phase where they "learn" (is it actually learning? boring semantic argument), and after that they just sit there doing whatever they already do.

        All the text models work on some variant of "given that the last 1000 letters of the input are X, and the last 1000 letters of my output are Y, what's the most likely next letter?" The model is huge, but nowhere near big enough to be able to memorize all the answers, so it needs to compress the information somehow. Learning words, grammatical rules, and facts about how the world works are all ways to get a more accurate "what's the next letter" in less space than memorizing everything, so a sufficiently big model starts having ways to work with those.
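
        Here's a toy version of that idea as a lookup table rather than a neural net (purely illustrative - real models learn compressed representations precisely because tables like this can't fit everything or generalize):

        ```python
        from collections import Counter, defaultdict

        # Toy character-level model: count which letter tends to follow
        # each 3-letter context in a tiny corpus.
        CONTEXT = 3
        counts = defaultdict(Counter)

        corpus = "the cat sat on the mat and the cat sat on the hat"
        for i in range(len(corpus) - CONTEXT):
            context = corpus[i:i + CONTEXT]
            counts[context][corpus[i + CONTEXT]] += 1

        def most_likely_next(context):
            """Given the last CONTEXT letters, return the most likely next letter."""
            return counts[context].most_common(1)[0][0]

        print(repr(most_likely_next("the")))  # ' ' - a space almost always follows "the"
        ```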

        People are researching where and how the heck those ideas get stored in models, but that's slower and harder and less funded than just chucking even bigger computers at training even bigger models, so we don't really know exactly how it works on the inside.

        Plumbing is really complicated btw, don't sell yourself short.

      • mittens [he/him]
        ·
        2 years ago

        Think of it as a beefier version of the word predictions you get on your smartphone keyboard. Only instead of working word by word, it strings a number of predictions together and cobbles together a coherent text.
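
        Roughly this loop, sketched in Python with a hard-coded toy table standing in for the actual prediction model (the table and the predict_next_word name are mine, just so the example runs):

        ```python
        # Stringing word predictions together into text. A real model would
        # score candidate next words; this toy table just looks them up.
        toy_model = {
            ("i", "am"): "going",
            ("am", "going"): "to",
            ("going", "to"): "the",
            ("to", "the"): "store",
        }

        def predict_next_word(prev_two):
            return toy_model.get(prev_two, "<end>")

        words = ["i", "am"]
        while True:
            nxt = predict_next_word((words[-2], words[-1]))
            if nxt == "<end>":
                break
            words.append(nxt)

        print(" ".join(words))  # i am going to the store
        ```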

    • HumanBehaviorByBjork [any, undecided]
      ·
      2 years ago

      i mean it's not learning, except in a metaphorical sense. learning is a thing that people do. it's able to answer common beginner programming questions because it's regurgitating answers it's been fed multiple times. that doesn't speak to its ability to solve novel complex problems now, or with more "learning." We've seen a progression from pure nonsense to syntactically valid code, but it doesn't necessarily follow that the next step is correct code.