Frankly, not sure where to begin with this one. Some here have already pointed out how it can easily be trained to produce racist/biased data, which is a big red flag to begin with. But am I the only one thinking about how this deep-learning AI algorithm is going to render millions obsolete at the behest of capital? As has been the case with almost everything developed under capitalism, it's a marvelous discovery that will in all likelihood be used for nothing but exploitation. And do I even have to mention how our leaders are geriatric and couldn’t regulate this shit to save their lives?

Unless this is somehow made open-source (and soon), we’re fucked.

  • BabaIsPissed [he/him]
    ·
    2 years ago

    Not when it comes to job replacement, at least. Unless it's a very simple task, there's no substitute for having a flesh and blood person actually check the output. I haven't checked out chatGPT yet, so I'll use another of their new models as an example: Whisper.

    It's a speech recognition model, and their paper reports a word error rate on par with human annotators. When you check it out, it really is super impressive. But crucially, it's only on par with unassisted human annotators. It's still worse than a human + machine combo, and IMO that will be the case for the foreseeable future. So yeah, it will be used by itself in cases where quality is not a concern, but in those cases I don't think people would have bothered hiring someone anyway. And that's a relatively simple task compared to the stuff ghouls think chatGPT can do (teaching, programming, etc.).
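
    For context, using it by itself is basically a one-liner (rough sketch from memory of the open-source whisper package; the model size and file name are just placeholders):

    ```python
    # Minimal sketch of unassisted transcription with OpenAI's whisper package.
    # "medium" and "interview.mp3" are arbitrary placeholder choices.
    import whisper

    model = whisper.load_model("medium")
    result = model.transcribe("interview.mp3")

    # This is the part a human still has to review: names, jargon, and
    # punctuation are exactly where "on par with unassisted humans" breaks down.
    print(result["text"])
    ```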

    Also, in the case of chatGPT specifically, we have to keep in mind that the model was trained to generate text in a conversational style, not to be right about stuff. Of course, it can retrieve truthful information about things it saw in the training set. Some people (including openAI folks, if I'm remembering the GPT-2 paper correctly) claim this means it's actually learning more tasks in addition to text generation, but IMO it's just a really clever digital parrot. It will often be confidently wrong in ways that are sometimes hard to detect, at least going by a recent r/programming thread: things like using keywords that look like they could exist in a language but don't, or doing something slightly different from what you asked.
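
    The kind of thing I mean (a made-up illustration, not actual chatGPT output; contains_any is deliberately a method that doesn't exist):

    ```python
    # Plausible-looking but wrong: Python strings have no .contains_any() method,
    # so this raises AttributeError even though it reads completely naturally.
    def has_forbidden_word(text, words):
        return text.contains_any(words)

    # What you actually have to write:
    def has_forbidden_word_fixed(text, words):
        return any(word in text for word in words)
    ```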

    I'm more concerned with how much more garbage text is going to flood the internet. Searching for anything is going to get even worse.

    • Budwig_v_1337hoven [he/him]
      ·
      2 years ago

      I’m more concerned with how much more garbage text is going to flood the internet. Searching for anything is going to get even worse.

      Very much agree, this stuff is absolutely golden for SEO blogspam pseudo-content

      • supdog [e/em/eir,ey/em]
        ·
        2 years ago

        I think it'll get...better? One outcome is you won't even do a Google search. You'll just ask chatGPT. Google search is garbage unless you specify something like site:reddit. At this point it barely qualifies as a search engine anymore, just paid product placement.

        Of course that depends on how they decide to monetize it. I could imagine it replacing a lot of what I use google/reddit for IF it's free.

    • spectre [he/him]
      ·
      2 years ago

      I’m more concerned with how much more garbage text is going to flood the internet. Searching for anything is going to get even worse.

      Guess what all future AI models are gonna be trained on lol

      • Budwig_v_1337hoven [he/him]
        ·
        2 years ago

        Google ran into exactly that problem way back when they first tried to improve Google Translate. Much of the text they scraped was their own output, so it didn't improve the model any further and instead ingrained its own error patterns deeper. I don't remember exactly how they solved it, but IIRC they trained another model to detect Google Translate output and filtered that out of the training set for the generative model.
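
        Conceptually it's just a filtering pass over the scraped data before training (made-up sketch; detector_score is a hypothetical stand-in for whatever classifier they actually used):

        ```python
        # Sketch: keep only sentences a machine-translation detector thinks are
        # human-written, so the model doesn't train on its own past output.
        # detector_score() is hypothetical, not Google's actual system.
        def build_training_set(scraped_sentences, detector_score, threshold=0.5):
            clean = []
            for sentence in scraped_sentences:
                if detector_score(sentence) < threshold:  # low score = likely human-written
                    clean.append(sentence)
            return clean
        ```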

      • BabaIsPissed [he/him]
        ·
        2 years ago

        Yep, no disagreement about the fine-tuning stuff, of course. I actually misremembered the thing that bothered me about a claim in the paper. I like to annotate as I read, and oftentimes I'll complain about something only to have it answered a page or so later.

        TL;DR: I'm dumb

        Our speculation is that a language model with sufficient capacity will begin to learn to infer and perform the tasks demonstrated in natural language sequences in order to better predict them, regardless of their method of procurement. If a language model is able to do this it will be, in effect, performing unsupervised multitask learning.

        Maybe (probably) I'm dumb, but I thought: can they really claim that? If a model sees, for example, a bunch of math operations and produces the correct output for those tasks, is it more likely that it picked up in some way what numbers are, what math operators do, and how to calculate, or that it simply saw ('what is 2+2?', '4') a bunch of times? Can we really say it's like a multitask model, where we know for a fact it's optimizing for multiple losses? The catch is that they did some overlap analysis later on: their training set covers at most 13% of a test dataset, and the model did pretty well in a zero-shot context on most of the tasks, so seeing the answers in the training set doesn't really explain the performance. So yeah, I guess they can claim that lol.
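
        The overlap check is conceptually just n-gram matching between train and test text (toy sketch, not their actual procedure, which IIRC involved Bloom filters over 8-grams):

        ```python
        # Toy sketch: what fraction of the test set's 8-grams also appear in the
        # training set. A low fraction means memorization can't explain the scores.
        def ngrams(tokens, n=8):
            return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

        def overlap_fraction(train_tokens, test_tokens, n=8):
            train_grams = ngrams(train_tokens, n)
            test_grams = ngrams(test_tokens, n)
            if not test_grams:
                return 0.0
            return len(test_grams & train_grams) / len(test_grams)
        ```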

      • hexaflexagonbear [he/him]
        ·
        2 years ago

        Since BERT the state of the art for almost any NLP task has been taking these pre-trained large language models and fine-tuning them for the specific task you want to do.

        I might be mistaken, but I believe it's more than just fine-tuning. It's fine-tuning so it picks up on the different context it's being used in, but for any non-trivial application there are additional machine learning systems attached to it. So, for example, drawing based on prompts would have to have a system capable of handling the "draw X in the style of Y" type tasks.
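
        For reference, the bare fine-tuning recipe from the quote looks roughly like this (a minimal sketch using the Hugging Face transformers library; the model name, label count, and example sentence are arbitrary), and my point is that shipped products attach more machinery around this core:

        ```python
        # Minimal sketch of "pre-trained LM + task-specific head, trained further
        # on labeled data". Model name and label count are arbitrary examples.
        import torch
        from transformers import AutoModelForSequenceClassification, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

        inputs = tokenizer("this movie was great", return_tensors="pt")
        labels = torch.tensor([1])
        outputs = model(**inputs, labels=labels)

        # outputs.loss is what a fine-tuning loop would backpropagate; the
        # pre-trained weights and the new classification head update together.
        outputs.loss.backward()
        ```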