Frankly, not sure where to begin with this one. Some here have already pointed out how easily it can be trained to produce racist/biased output, which is a big red flag to begin with. But am I the only one thinking about how this deep-learning AI algorithm is going to render millions obsolete at the behest of capital? As has been the case with almost everything developed under capitalism, it's a marvelous discovery that will in all likelihood be used for nothing but exploitation. And do I even have to mention how our leaders are geriatric and couldn't regulate this shit to save their lives?

Unless this is somehow made open-source (and soon), we’re fucked.

  • kissinger
    ·
    edit-2
    1 year ago

    deleted by creator

    • Budwig_v_1337hoven [he/him]
      ·
      2 years ago

      It's not a rigid, preprogrammed decision tree - it's entirely probabilistic, inferring from its training data. Still, 'learning' is too generous a term; it's more like... refining its predictions, getting better at what it does. It's getting better at rolling dice, but that's fundamentally all it can ever do.
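
      Rough sketch of what I mean by 'rolling dice' (vocabulary and probabilities completely made up; a real model has tens of thousands of tokens and learned weights):

      ```python
      import random

      # Toy illustration: the model assigns a probability to each
      # candidate next token, then samples one. These numbers are
      # invented for the example.
      next_token_probs = {
          "cat": 0.55,
          "dog": 0.30,
          "banana": 0.15,
      }

      tokens = list(next_token_probs)
      weights = list(next_token_probs.values())

      # "Rolling the dice": pick one token according to the weights.
      choice = random.choices(tokens, weights=weights, k=1)[0]
      print(choice)  # usually "cat", sometimes "dog", occasionally "banana"
      ```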

      • kissinger
        ·
        edit-2
        1 year ago

        deleted by creator

      • drhead [he/him]
        ·
        2 years ago

        'Learning' is a term of abstraction. "Making a probabilistic model of which tokens should follow each other for a given input" is annoying to say every time. It's the same as when people talk about evolution as if there were design: people who understand evolution know that when you say "this finch's beak is designed for eating seeds," no actual designer is implied. It's the same with machine learning.

    • Owl [he/him]
      ·
      2 years ago

      None of the recent wave of AI models continue to learn after being trained. They have a training phase where they "learn" (is it actually learning? boring semantic argument), then they just kind of sit there and do what they already do.
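
      If you want to see that split in code, here's a minimal PyTorch sketch (the model is a toy stand-in, not an actual language model):

      ```python
      import torch
      import torch.nn as nn

      # toy stand-in for a big language model (made-up architecture)
      model = nn.Linear(10, 10)
      opt = torch.optim.SGD(model.parameters(), lr=0.1)

      # training phase: weights actually change
      x, y = torch.randn(4, 10), torch.randn(4, 10)
      loss = ((model(x) - y) ** 2).mean()
      loss.backward()
      opt.step()

      # deployment: weights are frozen, the model only predicts
      model.eval()
      with torch.no_grad():
          out = model(x)  # no gradients, no updates - it just does what it already does
      ```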

      All the text models work on some variant of "given that the last 1000 letters of the input are X, and the last 1000 letters of my output are Y, what's the most likely next letter?" The model is huge, but nowhere near big enough to be able to memorize all the answers, so it needs to compress the information somehow. Learning words, grammatical rules, and facts about how the world works are all ways to get a more accurate "what's the next letter" in less space than memorizing everything, so a sufficiently big model starts having ways to work with those.
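
      Toy version of that "what's the next letter" loop, with a lookup table standing in for the model (a real model compresses this information into weights instead of memorizing, and uses a far bigger window):

      ```python
      from collections import Counter, defaultdict

      N = 4  # context window (real models use thousands of tokens, not 4 letters)
      corpus = "the cat sat on the mat. the cat sat on the hat."

      # "model": for each 4-letter context, count which letter came next
      counts = defaultdict(Counter)
      for i in range(len(corpus) - N):
          counts[corpus[i:i + N]][corpus[i + N]] += 1

      def next_letter(text):
          # most likely next letter given the last N letters
          return counts[text[-N:]].most_common(1)[0][0]

      text = "the "
      for _ in range(10):
          text += next_letter(text)
      print(text)  # "the cat sat on"
      ```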

      People are researching where and how the heck those ideas get stored in models, but that's slower and harder and less funded than just chucking even bigger computers at training even bigger models, so we don't really know exactly how it works on the inside.

      Plumbing is really complicated btw, don't sell yourself short.

    • mittens [he/him]
      ·
      2 years ago

      Think of it as a beefier version of the word predictions you get on your smartphone keyboard. Only instead of working on a word-by-word basis, it strings a number of predictions together and cobbles together a coherent text.
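
      Something like this, if you squint (the bigram table here is completely made up; a real model conditions on far more context than one word):

      ```python
      import random

      # "keyboard" suggestions: given the previous word, likely next words
      bigrams = {
          "the": {"cat": 0.6, "dog": 0.4},
          "cat": {"sat": 0.7, "ran": 0.3},
          "dog": {"ran": 0.8, "sat": 0.2},
          "sat": {"down": 1.0},
          "ran": {"away": 1.0},
      }

      words = ["the"]
      while words[-1] in bigrams:
          candidates = bigrams[words[-1]]
          # pick the next word, then feed it back in as the new context
          words.append(random.choices(list(candidates), list(candidates.values()))[0])

      print(" ".join(words))  # e.g. "the cat sat down"
      ```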